We use computers every day. They are the revolutionary technology of the 21st century, giving us affordable complex calculations, large quantities of information, and rapid communication, much of the time in a device that fits in the palm of your hand. The speed of computers has nearly doubled with each passing year. But with great power comes great responsibility.
The need to perform ever more complex calculations has grown, and with it the power of computing. I do not know about you, but for me the transistor is the greatest invention in the history of mankind. This little invention, which was once as big as an eraser, is today so small that billions of them can fit on a fingernail. A single transistor can switch on and off more than 300 billion times per second. To give you an idea, a Core i7 Broadwell-E has 3.2 billion transistors. But this is nothing when you need to forecast the weather, simulate nuclear tests, model the cells of the human body, simulate astrophysical environments, or even simulate the human brain.
In this video we will see some of the supercomputers of the present time.
Komm Mit Mir! Current computers can perform countless fundamental mathematical operations every second, and because of this a more useful metric was created to measure how many operations a computer can do. It's called FLOPS: Floating Point Operations Per Second. Remember that with FLOPS we also use the multiplier prefixes you probably already know: mega, giga, tera, peta, and so on.
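To put those prefixes in perspective, here is a minimal Python sketch of my own; the 100-gigaFLOPS "ordinary desktop" figure is an assumption for illustration only, not a number from the video.

# Minimal sketch of what the FLOPS prefixes mean in plain numbers.
# The 100-gigaFLOPS desktop figure below is an assumed value for illustration.

PREFIXES = {
    "mega": 10**6,
    "giga": 10**9,
    "tera": 10**12,
    "peta": 10**15,
}

desktop = 100 * PREFIXES["giga"]   # assumed ~100 gigaFLOPS desktop
super_ = 1 * PREFIXES["peta"]      # a 1-petaFLOPS supercomputer

seconds = super_ / desktop
print(f"The desktop needs {seconds:,.0f} s (~{seconds / 3600:.1f} h) "
      f"to match one second of the supercomputer")

Run it and you get about 10,000 seconds, nearly three hours of desktop work to equal a single second of a one-petaFLOPS machine.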
For now, I want you to forget anything related to quantum computers; one step at a time, folks. We will now look at some of today's largest computers and their FLOPS. As homework, I ask that at the end of this video you search the internet for how many FLOPS your own computer has available, and leave the answer here in the comments.
Fujitsu's K
Fujitsu's K was the first supercomputer to break the ten-petaFLOPS barrier, in November 2011. The K in its name refers to the Japanese word "kei", or 10 quadrillion, a reference to its number of FLOPS. To calculate at this level, K combines the power of 80,000 CPUs connected by links designed to transmit data at high speed. A water cooling system keeps the CPUs from overheating.
Oakforest-PACS
A collaboration between the University of Tokyo, the University of Tsukuba, and Fujitsu Limited resulted in the supercomputer called Oakforest-PACS, which broke the 25-petaFLOPS barrier thanks to a modern generation of Intel Xeon Phi processors, making it the fastest supercomputer in Japan. Composed of 8,208 computational nodes, it is used to deepen scientific research and to teach young researchers how to do high-performance computing.
Cori (NERSC)
The National Energy Research Scientific Computing Center, near Oakland, California, named its newest supercomputer "Cori" in homage to Gerty Cori, the first American woman to win a Nobel Prize in science. This computer uses a multi-processor architecture called the Cray XC40, manufactured by Cray, a company responsible for major advances in supercomputer performance during the 1970s. Cori can theoretically reach a processing speed of 29.1 petaFLOPS. It does this with Intel's Haswell family of Xeon processors together with Xeon Phi processors.
Sequoia
Sequoia is a supercomputer built to gauge the risks of nuclear war through scientific calculations on advanced weapons. It is owned by Lawrence Livermore National Laboratory in California. With 98,304 nodes, it is currently ranked as the sixth most powerful supercomputer on the planet in the TOP500 list. According to the Linpack benchmark, it has a speed of 17.2 petaFLOPS. I make my videos for this channel on a third-generation i3, which has two processing cores and 8 GB of RAM. Sequoia has 1,572,864 processing cores and 1.5 petabytes of RAM.
Titan
This is perhaps one of the best-known supercomputers in the Western world. Titan belongs to the Oak Ridge National Laboratory in Tennessee, in the US, and was the fastest supercomputer on the planet until the arrival of Tianhe-2 in 2013. Titan was the first supercomputer to combine AMD Opteron CPUs with NVIDIA Tesla GPUs, giving it a theoretical maximum performance of 27 petaFLOPS and a measured 17.6 petaFLOPS in benchmark tests. This is the power that allows researchers to run the simulations needed for climate prediction, astrophysics, and molecular physics.
Tianhe-2
Tianhe-2, also known as MilkyWay-2 (yes, that's right, Milky Way), is a supercomputer developed by the National University of Defense Technology of China. It became the world's fastest supercomputer in June 2013 with a measured performance of 33.86 petaFLOPS, and its theoretical maximum performance is much higher. It has 16,000 computational nodes, built from Intel Ivy Bridge and Xeon Phi processors, which allow it to run government security simulations; it also serves as an open research platform for scientists in southern China.
Piz Daint
At the end of 2016, the supercomputer Piz Daint in Lugano, Switzerland, received a major hardware upgrade that tripled its computational performance, bringing its measured performance to 19.6 petaFLOPS and making it the fastest supercomputer outside Asia. Named after a mountain in the Alps, Piz Daint creates high-resolution simulations and images. Soon it will provide processing power to the Large Hadron Collider at CERN, helping it analyze huge quantities of data.
Sunway TaihuLight
Currently ranked as the world's fastest supercomputer, Sunway TaihuLight has a theoretical maximum of 125 petaFLOPS and a measured 93 petaFLOPS in benchmark tests, roughly three times faster than the second-place supercomputer, Tianhe-2. Installed at the National Supercomputing Center in Wuxi, China, it is composed of 10.6 million cores and is being used for climate research, Earth systems modeling, and data analysis. In addition to being the fastest supercomputer in the world, Sunway TaihuLight is also ranked among the most energy-efficient, requiring substantially less power per FLOPS than its rivals.
It's not over yet: Japan is building the world's fastest supercomputer, which may make Japan the new global center for artificial intelligence research.
ABCI - Artificial Intelligence Bridging Cloud Infrastructure
The supercomputer is expected to operate at 130 petaFLOPS; we're talking about 130 quadrillion calculations per second. Satoshi Sekiguchi, a director general at Japan's National Institute of Advanced Industrial Science and Technology, explains that this supercomputer can help in the development of, for example, autonomous cars, that is, cars without drivers, as well as robotics and medical diagnostics. But now it's your turn: how many FLOPS can your computer perform? There is a very simple and small program called QwikMark 0.4 that makes this measurement. I'll leave the link in the video description.
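If you prefer not to install anything, here is a minimal Python sketch of my own (it is not QwikMark, just an illustrative timing of a matrix multiply with NumPy); it only gives a rough lower bound, since a simple script never reaches the hardware's theoretical peak.

# Rough FLOPS estimate: an illustrative NumPy timing sketch, not the QwikMark tool.
# The result is a lower bound; measuring real peak FLOPS needs benchmarks like Linpack.
import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                      # matrix multiply: about 2 * n**3 floating-point operations
elapsed = time.perf_counter() - start

flops = 2 * n**3 / elapsed
print(f"Roughly {flops / 1e9:.1f} gigaFLOPS on this machine")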
Until ABCI is built, share this video on social networks, leave your like, subscribe to the channel, and watch the other videos if you have not seen them yet.
Auf Wiedersehen!