A supercomputer powered by Linux has retained its number one place in the world supercomputer rankings. Tianhe-2 – which translates as Milky Way 2 – was developed by China’s National University of Defence Technology.

Tianhe-2 can operate at 33.86 petaflop/s – the equivalent of 33,863 trillion calculations per second – according to a test called the Linpack benchmark.
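
For scale, one petaflop/s is 10^15 – a thousand trillion – floating-point calculations per second, so 33.86 petaflop/s works out to roughly 33,860 trillion calculations per second; the extra precision in the 33,863 figure reflects the unrounded benchmark score of about 33.863 petaflop/s.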

There was only one change near the top of the leader board.

Switzerland’s new Piz Daint – with 6.27 petaflop/s – took sixth place.

The Top500 list is compiled twice a year by a team led by Prof Hans Meuer of Germany’s University of Mannheim.

It ranks each machine by how quickly it can solve a dense system of linear equations, but takes no account of other factors – such as how fast data can be transferred from one part of the system to another – that also influence real-world performance.
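
In essence, a Linpack-style measurement times the solution of a dense system Ax = b and divides a nominal operation count of roughly 2n^3/3 + 2n^2 by the elapsed time. The official benchmark is a heavily tuned HPL implementation, but a minimal Python sketch (an illustration only, assuming just NumPy) captures the idea:

    import time
    import numpy as np

    n = 2000                          # matrix size; real HPL runs use vastly larger systems
    A = np.random.rand(n, n)          # random dense coefficient matrix, as Linpack specifies
    b = np.random.rand(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)         # LU factorisation plus triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # nominal Linpack operation count
    print(f"{flops / elapsed / 1e9:.2f} gigaflop/s")

A single desktop machine typically lands somewhere in the gigaflop/s range on a test like this, which puts Tianhe-2’s 33.86 petaflop/s – millions of times faster – into perspective.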

IBM – which created five out of the 10 fastest supercomputers in the latest list – told the BBC it believed the way the list was calculated should now be updated, and would press for the change at a conference being held this week in Denver, Colorado.

“The Top500 has been a very useful tool in the past decades to try to have a single number that could be used to measure the performance and the evolution of high-performance computing,” said Dr Alessandro Curioni, head of the computational sciences department at IBM’s Zurich research lab.

“[But] today we need a more practical measurement that reflects the real use of these supercomputers based on their most important applications.

“We use supercomputers to solve real problems – to push science forward, to help innovation, and ultimately to make our lives better.

“So, one thing that myself and some of my colleagues will do is discuss with the Top500 organisers adding in new measurements.”

However, one of the list’s creators suggested the request would be denied.

“A very simple benchmark, like the Linpack, cannot reflect the reality of how many real applications perform on today’s complex computer systems,” said Erich Strohmaier.

“More representative benchmarks have to be much more complex in their coding, their execution and how many aspects of their performance need to be recorded and published. This makes understanding their behaviour more difficult.

“Finding a good middle-ground between these extremes has proven to be very difficult, as unfortunately all previous attempts found critics from both camps and were not widely adopted.”


Read more: http://www.bbc.co.uk/news/technology-24984320
