NerdHerd

silverSl!DE

Here's the thread for everyone who's into computers. (reloaded, so to speak)
[article=http://spectrum.ieee.org/computing/it/is-there-a-liquid-fix-for-the-clouds-heavy-energy-footprint]
Is There a Liquid Fix for the Cloud’s Heavy Energy Footprint?
Cooling servers with liquids rather than air could greatly cut energy use, but it faces cultural barriers in commercial data centers
By Martin LaMonica
Posted 15 Jan 2014 | 21:04 GMT

Photo: Green Revolution Computing
Submarine Servers: Green Revolution Cooling's system submerges servers in a dielectric mineral oil that's better than air at keeping computers cool.
Asicminer, a Hong Kong–based bitcoin mining operation, has taken an unorthodox step to gain an advantage over other computing systems running the algorithms that generate the virtual currency. To save money on energy, Asicminer puts its servers in baths of oil to cool them.

The result? Asicminer’s 500-kilowatt computing system uses 97 percent less energy on cooling than if it employed a conventional method. Its custom-made racks hold computers that are submerged in tanks filled with dielectric oil that won’t damage the machines. The oil takes up the system’s heat, and inexpensive cooling equipment extracts the heat out of the oil, ultimately expelling it outside.

The bitcoin-mining facility is on the leading edge of a movement to use liquids to cool data centers. Operators of high-performance supercomputers have long understood that liquids trump air in the physics of heat removal. Because liquids are denser than gases, they are a more efficient medium to transport and remove unwanted heat.

Yet direct liquid cooling is a rarity in the corporate data centers that run bank transactions and the cloud facilities that serve data to smartphones. Data centers consume more than 1 percent of the world’s electricity and about 2 percent of the electricity in the United States. A third or more of that expenditure is for cooling. Given computing’s growing energy cost and environmental footprint, proponents say it’s just a matter of time before some form of liquid cooling wins out.

“Air cooling is such a goofy idea,” says Herb Zien, the CEO of LiquidCool Solutions, in Rochester, Minn., which makes immersion-cooling technology. “The problem is that there’s so much inertia and so much investment in the current system that it’s hard to turn back.”

Indeed, over the years many smart people have perfected the art of moving air around data centers for maximum efficiency. They have a number of techniques to choose from, such as setting up hot and cold aisles, using sensors to monitor conditions, and bringing in cold outdoor air for cooling. And the very idea of pumping fluids, especially water, into an expensive server rack requires a leap of faith that not all technology professionals are willing to take.

“Historically, the thinking has been that electronics and liquids don’t mix,” says Steven Hammond, the director of the Computational Science Center at the National Renewable Energy Laboratory (NREL), in Golden, Colo. “Everybody working in data centers is hydrophobic.” NREL flows water into its server racks to remove heat, eliminating the need for power-hungry air conditioners. In the colder months, pumps circulate the heated water to warm the laboratory building.

The average data center spends more than 30 percent of its energy bill just on cooling, making it a major cost to the Googles and Facebooks of the world. But liquid cooling, particularly immersion cooling or circulating water through server racks, has yet to make a big splash in the cloud. Microsoft, which operates more than a million servers worldwide, is sticking with air cooling because it’s proven and scalable, says Kushagra Vaid, general manager of cloud server engineering at Microsoft. “Cost of scaling is a big factor for Microsoft when considering new types of cooling methods,” Vaid says. “Our scale demands standardized and simplified techniques that are deployable across server environments and geographies.”

One maker of immersion cooling, Green Revolution Cooling, in Austin, Texas, claims that its system, in which servers are placed in a tank filled with mineral oil, is 60 percent cheaper than building and operating a new data center. But it does require a change in how data centers are installed and serviced. For example, server fans need to be removed, and technicians need to wear gloves when swapping out servers.

“[Immersion cooling] is interesting technology, but the real question is, How do I implement it in a data center?” says Ed Turkel, the group manager of high-performance computing marketing at HP. “What does a data center look like with these high-tech bathtubs with servers in them?”

The strongest need for liquid cooling is in situations where a lot of compute power is packed into a small space, experts say. The Asicminer system in Hong Kong, for instance, is compact enough to reside in a high-rise building, taking up one-tenth of the space it would if it were air-cooled.

But the trend in building cloud server farms has been the opposite: Locations are chosen for their cheap and plentiful electricity, which often places them in remote areas with plenty of space. “A lot of companies don’t care one iota about power density. If you’re Google, you just build another data center,” says Phil Tuma from the Electronics Markets Materials division of 3M, which makes high-tech liquids for immersion cooling.

In the future, though, data-center operators may want to place their computing power closer to users. There’s also increasing pressure from environmental groups to lower energy use from cloud data centers. Still, whether liquid cooling will break beyond its niche status remains an open question. “There’s a point where the technology stops being used by early adopters and starts being used by the early majority, and there’s a chasm in between,” says Matt Solomon, the marketing director at Green Revolution Cooling. “We’re just waiting for the domino effect.”
[/article]
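The physics argument in the article ("liquids are denser than gases, they are a more efficient medium to transport and remove unwanted heat") can be made concrete with a back-of-the-envelope comparison of volumetric heat capacity. The numbers below are standard textbook values, not from the article:

```python
# How much heat one cubic metre of coolant absorbs per kelvin of
# temperature rise: volumetric heat capacity = density * specific heat.
# Values are common textbook figures at roughly room temperature.

coolants = {
    # name: (density in kg/m^3, specific heat in J/(kg*K))
    "air":         (1.2, 1005),
    "mineral oil": (850, 1900),
    "water":       (998, 4186),
}

for name, (rho, cp) in coolants.items():
    vol_heat_cap = rho * cp  # J/(m^3 * K)
    print(f"{name:12s}: {vol_heat_cap / 1e3:8.0f} kJ/(m^3*K)")

# Water carries on the order of 3000x more heat per unit volume than air,
# and mineral oil over 1000x more -- which is why a modest liquid loop can
# replace the huge volumes of air that fans must move through a rack.
```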
 
That's pretty cool. I'm curious whether it catches on. It would be something for CERN; then at least they'd get their power problems under control.
 
Only annoying when something in there needs swapping out,
then you have to hang the servers up to dry first... ;)
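The article's savings figures can be turned into rough numbers. Below is a minimal sketch of power usage effectiveness (PUE, total facility power over IT power), assuming cooling is the only overhead; that is a simplification on my part, since real facilities also lose power to distribution and lighting:

```python
# PUE = total facility power / IT equipment power.
# From the article: a third or more of data-center energy goes to cooling,
# and Asicminer claims immersion cut its cooling energy by 97 percent.

it_load = 1.0                 # normalize IT power to 1
cooling = 0.5                 # cooling = 1/3 of total => 0.5x the IT load
pue_air = (it_load + cooling) / it_load
print(f"air-cooled PUE ~ {pue_air:.2f}")         # ~1.50

pue_immersion = (it_load + cooling * 0.03) / it_load
print(f"immersion PUE  ~ {pue_immersion:.3f}")   # ~1.015
```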
 
[article=http://www.engineering.com/DesignerEdge/DesignerEdgeArticles/ArticleID/6676/Tianhe-2-Tops-Supercomputer-List.aspx]
Tianhe-2 Tops Supercomputer List
Kyle Maxey | Posted on November 20, 2013

Tianhe-2.jpg


For the second consecutive time, China’s Tianhe-2, which translates to “Milky Way 2”, has topped the list of the world’s fastest supercomputers.

Pegged at 33.86 petaflops, the National University of Defense Technology’s computer performed nearly twice as well as the runner-up, the US’s Titan supercomputer housed at Oak Ridge National Lab.

While China’s Tianhe-2 is an impressive piece of machinery, computing experts are torn as to whether the machine’s full potential is being put to use. Unlike the software that runs on everyday computers, the calculations that run on a supercomputer are custom made to work with the machine’s vast architecture.

According to reports, Chinese computer scientists are still working on porting applications to the Tianhe-2. In contrast, on its first day of operation, Oak Ridge’s Titan had five applications ready to crunch simulation data for experiments ranging from combustion simulations to atmospheric modeling.

Although it’s possible that the Tianhe-2 isn’t currently realizing its full potential, once developers get the right code in place, the resolution and complexity of computer simulations will surely reach new heights.

Just in case you’re keeping score, here’s the Top 10 list of the most powerful supercomputers, as compiled by Top500 on November 18, 2013.

1. Tianhe-2 (MilkyWay-2) - 33.86 petaflops
2. Titan - 17.59 petaflops
3. Sequoia - 17.17 petaflops
4. K computer - 10.51 petaflops
5. Mira - 8.59 petaflops
6. Piz Daint - 6.27 petaflops
7. Stampede - 5.17 petaflops
8. JUQUEEN - 5.01 petaflops
9. Vulcan - 4.29 petaflops
10. SuperMUC - 2.90 petaflops
[/article]
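As a quick sanity check on the “nearly twice as well” claim, the list above can be held as data; the numbers are copied straight from the Top500 figures quoted in the article:

```python
# Top500 list from November 18, 2013, as quoted above (petaflops).
top10 = [
    ("Tianhe-2 (MilkyWay-2)", 33.86),
    ("Titan", 17.59),
    ("Sequoia", 17.17),
    ("K computer", 10.51),
    ("Mira", 8.59),
    ("Piz Daint", 6.27),
    ("Stampede", 5.17),
    ("JUQUEEN", 5.01),
    ("Vulcan", 4.29),
    ("SuperMUC", 2.90),
]

leader, runner_up = top10[0], top10[1]
ratio = leader[1] / runner_up[1]
print(f"{leader[0]} vs {runner_up[0]}: {ratio:.2f}x")  # ~1.92x, i.e. nearly 2x

total = sum(flops for _, flops in top10)
print(f"Top-10 combined: {total:.2f} petaflops")
```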
More pictures:
tianhe-2_01.png


tianhe2b-e7fb5c129f2d3f14.jpeg


Technology in detail:

TH2_Fig_1.png


TH2_Fig_2.png


TH2_Fig_3.png


TH2_Fig_6.png


Source: http://www.netlib.org/utk/people/JackDongarra/PAPERS/tianhe-2-dongarra-report.pdf
 
For everyone who ever ends up sitting/standing in front of a non-booting system: it looks worse than it is. Just work through it step by step. ;) BFTF.png
 