HPC

Quantum Computing: Why You Should Pay Attention


Quantum computers are coming

Quantum computing, the subject of billions of dollars' worth of research and theory, is a potential breakthrough in how computing is done today, and it holds the potential to greatly expand what computers can do. However, barriers to adoption remain, and it's unclear how the field will progress. Furthermore, the mathematics underlying quantum computers is difficult to fathom, even for those with a strong grounding in math and science.

Quantum Mechanics

Richard Feynman, one of the most insightful, and pithiest, quantum physicists, famously quipped, “If you think you know quantum mechanics, you don’t know quantum mechanics.” Quantum mechanics is also the field that prompted Einstein to exclaim, “God doesn’t play dice,” a natural reaction for anyone first encountering some of its implications. Typical computers run on electricity, a force that can seem mysterious but that can be understood through analogies and a bit of education. Quantum computers, on the other hand, rely on a field that requires us to wrap our minds around concepts that seem utterly unfamiliar and sometimes even a bit unsettling. Fortunately, one doesn’t need to know much about quantum mechanics to get a basic grasp of quantum computing. Accept one concept, superposition, and the rest of quantum computing falls into place.

Typical computers rely on bits, represented as ones and zeros. Using just these two values, our computers can perform any arithmetic and handle sophisticated logic. Quantum computers, on the other hand, replace bits with quantum bits, or qubits. Unlike their binary counterparts, qubits can exist as both one and zero at the same time, in a so-called superposition. This isn’t an analogy: According to the most common interpretation of quantum mechanics, qubits really are ones and zeros simultaneously. With this capability, qubits can solve certain problems that are computationally expensive using binary arithmetic and logic in far fewer steps, and some problems can be solved in a single step. Although the very concept of quantum computing sounds outlandish, devices are being developed by tech giants including Intel and Google, and Microsoft is already unveiling toolkits for developing quantum software. Startups are having an impact as well, with companies including Rigetti Computing seeking to forge a role in the technology.
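As a rough illustration of what superposition means computationally, the following Python sketch treats a single qubit's state as a pair of amplitudes, puts it into an equal superposition with a Hadamard gate and samples measurements. This is a classical simulation for intuition only, not how real quantum hardware works.

```python
import numpy as np

# A classical bit is either 0 or 1. A qubit's state is a pair of complex
# amplitudes (a, b) with |a|^2 + |b|^2 = 1; measurement yields 0 with
# probability |a|^2 and 1 with probability |b|^2.
zero = np.array([1.0, 0.0])                 # the |0> state

# The Hadamard gate puts the qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
superposed = H @ zero                       # amplitudes (0.707..., 0.707...)

probabilities = np.abs(superposed) ** 2     # [0.5, 0.5]
samples = np.random.choice([0, 1], size=10, p=probabilities)
print(probabilities, samples)               # roughly half zeros, half ones
```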

Security Ramifications

Modern cryptography, which protects our passwords and keeps data encrypted, is built on mathematical problems that are impractical to solve using modern computers. Cracking a strong encryption key, for example, might require something like the entirety of the world’s computing power working for millions of years. With quantum computing, on the other hand, such security measures could be broken in comparatively few steps. Developing new, quantum-proof security measures is difficult, which means that virtually all data we assume to be safe from prying eyes may become vulnerable. The first government or military to develop robust quantum computing technology will have an incredible advantage over counterparts still relying on traditional computing, and the advent of practical quantum computing will have tremendous geopolitical ramifications.
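To make the point concrete, here is a toy Python sketch of why today's encryption is considered safe: RSA-style schemes rest on the difficulty of factoring large numbers, which classical brute force cannot manage at realistic key sizes. The numbers and helper below are illustrative only; real keys use numbers hundreds of digits long, and Shor's algorithm on a quantum computer would factor them in polynomially many steps.

```python
# Toy illustration (not real cryptography): RSA-style schemes rest on the
# difficulty of factoring n = p * q. Classical trial division needs on the
# order of sqrt(n) steps; Shor's algorithm on a quantum computer would need
# only polynomially many.
def trial_division_factor(n):
    """Find a factor of n by brute force; fine for toy numbers only."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

toy_modulus = 62615533                    # 7919 * 7907, cracked instantly here
p = trial_division_factor(toy_modulus)
print(p, toy_modulus // p)                # real keys are hundreds of digits long
```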

Practical Uses

One of the most practical uses of quantum computing will be search. Using Grover’s algorithm, quantum computers can search through databases and other collections of data far faster than traditional techniques allow, and the approach scales far better as the data grows. As databases continue to grow in size, this improvement can make it more feasible to handle the large volumes of data expected to come online in the coming years and decades as we reach physical limits in storage device latencies. Another practical advantage lies in our understanding of the world. Simulating quantum effects is notoriously difficult on the computers we rely on today, as the fundamentals of quantum mechanics are vastly at odds with how those devices work. Quantum computers will make simulating these effects far simpler, allowing us to better unravel the mysteries of quantum mechanics.
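The scaling advantage is easy to see in rough numbers: an unstructured classical search checks about N/2 items on average, while Grover's algorithm needs on the order of the square root of N quantum queries. A quick back-of-the-envelope comparison in Python:

```python
import math

# Rough query counts for unstructured search over N items: a classical scan
# checks about N/2 items on average, while Grover's algorithm needs on the
# order of (pi/4) * sqrt(N) quantum queries.
for n in (10**6, 10**9, 10**12):
    classical = n / 2
    grover = (math.pi / 4) * math.sqrt(n)
    print(f"N={n:.0e}  classical ~{classical:,.0f}  Grover ~{grover:,.0f}")
```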

Even when quantum computing becomes common, it’s difficult to envision it completely replacing traditional computing devices. The types of applications at which quantum computers excel don’t seem to have much practical use for typical computer users. Furthermore, it will take some time for quantum computers to become smaller and more affordable, and there may be barriers preventing their widespread use. Perhaps home quantum computing will be a reality at some point, but very few people would see significant benefits from the technology as we understand it today. Still, it’s worth bearing in mind early predictions about traditional computers: people once said much the same about the architecture most of us now carry in our pockets and purses as smartphones. Even so, with the uncertainty surrounding quantum computing development timelines, it’s probably best to avoid expecting the revolution to arrive by a particular date.

Summit: The World’s Fastest Supercomputer


The battle for the world’s fastest supercomputer has a new victor: Summit

When the TOP500 rankings were unveiled in June 2018, Summit ascended to the top of the list, displacing China’s Sunway TaihuLight.

According to IBM, Summit is able to achieve 200 petaflops of performance, or 200 quadrillion calculations per second. That marks a significant gain on Sunway TaihuLight, which performs a still-staggering 93 petaflops. Summit holds more than 10 petabytes of RAM, and its funding came as part of a $325 million program funded by the United States Department of Energy. Each of Summit’s 4,608 nodes holds two IBM Power9 chips running at 3.1 GHz. As with many new supercomputers, graphics processing units are also part of the design: Each node has six Nvidia Tesla V100 GPUs, which perform certain classes of calculations far faster than a traditional CPU can and excel at many artificial intelligence tasks. The logistics of handling such a large installation are complex: Summit sits on an eighth of an acre and requires 4,000 gallons of water to be pumped through it every minute to prevent overheating. Summit’s software is built on Red Hat Enterprise Linux 7.4.
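For a sense of scale, a quick bit of arithmetic using the figures quoted above (node count, chips per node and peak performance) shows how the totals add up; these are derived numbers for illustration, not additional specifications.

```python
# Back-of-the-envelope totals derived from the figures quoted above.
nodes = 4608
cpus_per_node, gpus_per_node = 2, 6
peak_flops = 200e15                       # 200 petaflops

total_cpus = nodes * cpus_per_node        # 9,216 Power9 chips
total_gpus = nodes * gpus_per_node        # 27,648 Tesla V100 GPUs
flops_per_node = peak_flops / nodes       # roughly 43 teraflops per node

print(total_cpus, total_gpus, f"{flops_per_node / 1e12:.1f} TFLOPS per node")
```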

Located at Oak Ridge National Laboratory in Tennessee, Summit will be used for a range of tasks, including exploring new types of materials through simulation, attempting to uncover links between genetics and cancer, simulating fusion in an attempt to make it a feasible means of generating energy, and even simulating the universe to tackle remaining astrophysical mysteries. Summit was designed to cover 30 different types of applications, making it a reasonably general-purpose supercomputer that will remain useful even when it’s eventually surpassed by future machines.

A Boon to Science

A natural question that arises with supercomputers is whether they’re worth such large investments. After all, the internet makes it trivial to connect commodity computing equipment and create ad hoc systems capable of performing massive calculations. For some tasks, however, supercomputers provide capabilities that other approaches cannot match, especially in the field of simulation. Weather simulations, for example, are notoriously complex, and much of what we know about weather today is the result of work done on massive supercomputers over multiple decades. Summit expands our capabilities, opening up new avenues of scientific exploration and potentially leading to breakthroughs across scientific fields.

In 2001, China had no devices that would meet the typical description of a supercomputer. Today, China dominates the Top500 list and is home to 202 of the top 500 machines, a count that moved the country ahead of the United States in November 2017. These supercomputer aspirations are in line with China’s other goals: the country wants bragging rights for supercomputing power, but it also wants to be the world leader in artificial intelligence, including machine learning and other growing technologies, and supercomputing prowess will aid it in that mission.

The appropriately named Summit does give the United States bragging rights for now, as having the fastest supercomputer is a source of pride and demonstrates the ability of the US to lead in technology. However, in terms of total supercomputing power, China is still on top, and it seems it’s only a matter of time before Summit is dethroned.

Redefining the car: thriving in a combustion-engine-less world as an auto manufacturer


As we see more green-painted parking spaces around cities and ever more students and luxury consumers driving electric vehicles – or at least plug-in hybrids – it is natural to wonder where internal combustion vehicles (ICVs) will be in a decade or so.

Although government incentives for citizens differ greatly across European countries, the EV as a “second car” is an up-and-coming trend among families, while businesses increasingly adopt fleets of emission-free cars and vans.

In this article, we will leave aside the debate over whether EVs really are green, given that the majority of countries still generate electricity from coal or oil, which some argue defeats the grand purpose. Instead, we will analyse what the future of the automotive industry is expected to look like in a dramatically redefined vehicle market and make suggestions for automotive manufacturers.

It might be amusing to learn that the electric vehicle dates back more than a century and was, in its early days, marketed to women, who were supposedly put off by the complexity of the traditional car. Whatever the intent, its popularity died out because oil was neither scarce nor expensive and awareness of emissions was all but absent. In 2018, the picture is different: apart from government pressure and new regulations, automotive executives worry about EV adoption among consumers. The simpler and more digital the car, the better – essentially erasing everything “a car” ever meant to traditional manufacturers. It is no wonder that not all of them are keeping up with purely electric, high-tech models. Yet this is widely suggested to be the win-or-lose factor in the market of the future, especially with a digital-native generation taking over the consumer profile – a concern for traditional manufacturers that are not focused on making their vehicles resemble computers and smartphones.

Another aspect that gives automotive manufacturers a headache is price: the price of development and testing, of manufacturing, and the market price. Market share is won by offering convenience in more respects than zero emissions and government incentives. The breakthrough of the BMW i3 and i8 has not been forgotten, and since then manufacturers have been racing to each bring an affordable electric model to market. The technology required to do this successfully comes at great cost, however, and many settle for keeping a hybrid model in their line-up. Is this a smart strategy?

Research suggests this strategy will work for only a limited time. The public and the state are acutely aware of local pollution and global warming, and, more importantly, public perception and trade policy are the driving factors behind consumers’ choices.

To create vehicles that are affordable, efficient and superior to ICVs, technology is clearly paramount, not only in the vehicle itself but in how it is developed. As observed in the case of the famous BMW i3, an agile and efficient R&D process (with clusters colocated in Iceland for 18% of the running cost of Germany) led to a roaring success in timing, quality and market share. In 2017, auto manufacturers faced another tough challenge: increased CO2 emission transparency with the WLTP test. Once again, efficiency, aerodynamic design and smart driving (including autonomous driving, which is often the most economical on highways) are key features, whether manufacturers like it or not.

There is plenty of revenue in the EV market, too – that is, if the manufacturing process is done right. The luxury EV market has only expanded with the likes of Tesla, the fully electric BMW iNext, Maybach and many more. Clearly, students and companies are not the only ones attracted to electrification. Once again, the value of technology such as HPC environments for testing, development and efficiency evaluation cannot be overstated. Physical prototype testing as we know it is gone in a market where development cycles are down to weeks rather than months.

According to PwC, the time for auto manufacturers to act is right now. The product portfolio and strategy, both technology-wise and product-wise, need to be revised. Organisational change needs to be implemented, with the current success factors reflected in the priorities of the business. Products need to be optimised, in an increasingly complex, technology-heavy development process, to meet consumers’ price requirements.

Much more could be said about the switch of fuel for mobility and the dramatic turn in the auto market that is making traditional cars as we know them look ancient, but one conclusion can already be drawn for auto manufacturers: this is a development that benefits consumers, whose choice matters more than ever, while the competition on the other side is fierce.

There are many components to success in this new world, but the foundation, in our observation, is a solid, strategic R&D plan and set-up. Why? Because this is the new laboratory, where materials are explored, design is experimented with, and safety is observed and improved. It is usually a challenging project with a “chicken-and-egg” problem, as building infrastructure or leasing hardware environments requires a great deal of upfront financial commitment. I would suggest that this is the reason we see some auto manufacturers thrive and others die in today’s market conditions.

Since funds are to be used wisely, why not run these workloads from the optimal place on the planet, one with enough electricity to power up to 3 million homes (or more)? The analysts are clear, and our experience has shown Iceland to be outstandingly efficient and cost-effective, whether for colocation, HPC leasing or simply running data center operations. The combination of free cooling all year round, independent green energy contracts lasting more than a decade, and a highly motivated and skilled workforce has led several leaders in the automotive industry to make smart R&D location choices. We see the results in VW’s and BMW’s futuristic visions and bold statements. It is not surprising: when you save up to 80% of the running costs compared to your domestic location, anything is truly possible.

Now, I must admit that I am biased, and there are more factors than a smart IT location involved in making an automotive business thrive over the next decade. This is by no means a magic recipe for a complete revolution. However, it is one of the larger strategic components many fail to evaluate, which leaves them behind in more than just car design. The time to act, evaluate and optimise is now – if auto manufacturers want to stay in the game, that is.

The journey starts by unleashing cash for innovation while doing almost nothing: simply switching location and getting on with the real work without limitations. Find out more here.


About the Author

This article was written by Anastasia Alexandersdóttir, international business development and marketing manager at Opin Kerfi.

Opin Kerfi are partners of Cloud28+, the open community of over 600 innovative technology businesses, built to accelerate cloud adoption and digital transformation around the globe. It has members located across North America, EMEA, Latin America and Asia. Join free or find out more here.

Microsoft’s Full Stack Approach To Quantum Computing

Until now, computer advances have been incremental

Faster processors, more RAM, and larger disk drives represent the main ways computers have evolved since they were first invented. However, as Moore’s Law approaches its limits, a new technology is set to change how things compute forever: quantum computing. Progress has been gradual, but potential quantum computing architectures are now being developed at a rapid pace, and experts predict the first practical quantum computers are close to reality.

Typical processors use binary mathematics and storage to process data at an extraordinary pace. Aside from experimental ternary machines, binary operations have been the only technology used to power computers until now. Quantum computing is different because it eschews bits, which represent either 0 or 1. Instead, so-called “qubits” represent a superposition of two states simultaneously. When operational, quantum computers will be able to solve certain computationally challenging tasks in a single step. Most current forms of computer encryption, for example, will be trivial to break once quantum computers are online. Advances in other fields like genomics, weather simulation and AI will be rapid as well, and the technology has the potential to run massive simulations and other tasks not feasible with binary computing. Microsoft UK’s Senior Director of Commercial Product Software Engineering believes quantum computing has the potential to solve some of the biggest challenges facing the world today.

If you think about global warming and drug discovery and new medicines, there are computational things needed to solve those problems which are beyond what we have with classical IT at the moment. Because quantum computing offers us a chance to solve those problems, it’s something we’re really interested in.
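Those computational problems beyond classical IT are largely simulation problems, and the gap is easy to quantify: classically tracking n qubits (or the quantum state of a molecule) requires roughly 2^n complex amplitudes. A rough Python illustration, assuming 16 bytes per double-precision complex amplitude:

```python
# Memory needed just to store the state of n qubits on a classical machine,
# assuming 16 bytes per double-precision complex amplitude.
for n_qubits in (10, 30, 50):
    amplitudes = 2 ** n_qubits
    memory_bytes = amplitudes * 16
    print(f"{n_qubits} qubits: 2^{n_qubits} = {amplitudes:.2e} amplitudes, "
          f"~{memory_bytes / 2**30:.2e} GiB of state")
```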

We recently caught up with him at Microsoft’s annual Future Decoded conference. Watch the first part of our interview above.

Playing the Long Game

Microsoft has long been a powerful force in programming. Altair BASIC, one of the company’s earliest products, let developers write code for one of the first personal computers: the Altair 8800. Microsoft’s programming tools, including Visual Studio, have been among the most popular for writing software for Microsoft’s operating system and other platforms.

Microsoft have been investing in quantum computing research for over a decade. The company recently revealed its intention to take a full stack approach to the technology and showcased the progress it has made toward developing both a topological qubit and the ecosystem of hardware and software that will enable developers to harness the power of quantum computing. This includes launching an entirely new quantum programming language aimed at making the power of qubits accessible to developers.

Microsoft’s new programming language is designed to work inside Visual Studio itself, and this early work aims to provide a seamless transition once quantum computers come to market. The familiar interface will help ease the burden of switching to the new type of thinking quantum computing requires, and the language provides a means of programming quantum computers without writing machine-specific code. Although it currently lacks a name, it offers a comprehensive set of programming instructions that can drive the proposed types of quantum computer, even though such machines don’t yet exist.

As Rob explains in the second part of our interview, this lack of available technology has forced Microsoft to build its own.

Most programmers will find the new language somewhat familiar. Its C-style design looks much like other popular programming languages, including C itself, C++, Java, and C#. New keywords riddle the sample code, however: developers looking to take advantage of quantum computing will have to learn terms including “msqQ” and “SetQubit”. Familiar IDE features, including colour coding, debugging, and code folding, already have some support.

The programming language has little practical use right now; it can run quantum computing simulations, but only slowly. The early release does, however, provide a glimpse of the future, and the language can serve as an educational tool to prepare developers for the arrival of quantum computers. Microsoft promise the code used in the simulations will be the same when quantum compute power reaches Azure. Once named, Microsoft’s new language might become the de facto means through which people think about and study quantum computing.

In addition to the new coding language, the company are developing their own infrastructure hardware to meet the demands of tomorrow’s quantum developers. Microsoft will release a free preview by the end of the year including a full suite of libraries and tutorials to enable developers to get started with ease.

How Do You Get ROI From Energy-Hungry HPC?


Want to ensure a return from your power-intensive High Performance Computing? Do it in Iceland

Cross-industry enterprise HPC adoption is on the rise. From streamlining and cutting the cost of R&D processes to prototype testing and complex calculations, there are more reasons to take advantage of these products than not. In fact, businesses that have experienced the edge massive compute provides repeatedly report that it is now an essential part of their success.

Thorsteinn Gunnarson, CEO of Opin Kerfi in Iceland, explained to us exactly why companies decide to invest in their own HPC environments or consume HPC as a Service, and what impact it has on their ROI. For example, a well-known auto manufacturer with its HPC cluster in Iceland today builds a new prototype every two weeks, dramatically shortening its time to market and putting it at a tremendous competitive advantage.

Thorsteinn also notes that only the power and agility of High Performance Computing can enable us to cope with data and real-time information, and to create business value in retail and the public sector, improving everyday life, especially in cities, through IoT-powered smart apps and real-time analytics, such as parking garage availability updates and traffic alerts. HPC also makes real-time promotions possible by bringing an understanding of individual buying behaviour and preferences.

The two outstanding factors, however, are cost cutting and time saving, which the conversation keeps returning to, and indeed these are the two most pressing issues for any business, regardless of industry.

Opin Kerfi’s HPC proposition lies in the combination of processing power and a strategic, sustainable location. Operating your own equipment or consuming HPC as a Service from Opin Kerfi means a dramatic cost reduction compared to an equivalent product elsewhere: the energy is sourced entirely from sustainable, natural sources, guaranteeing the lowest possible price, and together with free cooling of the datacenter this brings the running outlay down to a minimum.

Power-demanding technology is not a problem in Iceland. With abundant energy, top-class security and a climate well suited to these pursuits, high performance computing thrives with Opin Kerfi. Not to be forgotten, Iceland’s population and business environment offer a welcoming profile for datacenter and HPC establishments, with access to relevant expertise and a highly educated workforce.

Listen to our chat with Mr Gunnarson below or on Apple Podcasts.



Using HPCaaS For Your Next Breakthrough Fearlessly – The Options Are Limited


Massive compute, as a service, at the lowest price possible for you and the environment. Sounds too good to be true? Hosted in a location with military-grade security, running your HPC in Iceland is a no-brainer. Let’s analyse why you would need HPC in the first place, and where the return on investment is highest.

It is no longer a matter of argument but of plain fact: how information technology is used is the deciding factor in business success in any industry.

With the compute power now available, we can solve problems that were previously impossible, problems that a standard workstation could not handle in a reasonable amount of time. Using dedicated high-end hardware at incredible speed, prototypes are built and tested virtually with algorithms and calculations processed in parallel. This has led to dramatically leaner R&D processes and accelerated time to market in, for example, the automotive and manufacturing industries. Instead of physically testing prototypes from the outset, high-performance computing makes it possible to filter all the candidate prototypes and models using sophisticated compute and software, and then drill down to the best-performing designs to be physically produced and tested. In the tyre industry, to take another example, tread wear is no longer a physical test run over months but a calculation made possible and accurate by massive compute. Goodyear credits its large HPC investment for the improved performance of its all-season tyres, reducing the R&D budget from 40% to 15%.
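As a rough sketch of the “filter virtually, then build the best” workflow described above; the simulate_performance helper below is a hypothetical stand-in for whatever CFD, FEA or wear-simulation code a manufacturer would actually run on a cluster, not any vendor's real API.

```python
import random

# Hypothetical sketch of the "filter virtually, then build the best" workflow.
# simulate_performance() is a stand-in for whatever CFD, FEA or wear-simulation
# code a manufacturer would actually run on an HPC cluster.
def simulate_performance(design_id):
    """Placeholder for an expensive simulation; returns a quality score."""
    random.seed(design_id)
    return random.uniform(0.0, 1.0)

candidate_designs = range(1, 10_001)              # thousands of virtual prototypes
scores = {d: simulate_performance(d) for d in candidate_designs}

# Only the handful of best-scoring designs go on to physical prototyping.
shortlist = sorted(scores, key=scores.get, reverse=True)[:5]
print("Build and test physically:", shortlist)
```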

Beyond the traditional industries, HPC now enables AI to devise its own strategies and surpass human players in games as complex as poker, simply because the compute power is available.

A myriad of industries have discovered, and are still discovering, the power of HPC and how it can not only solve product development problems but also sharpen their market intelligence and competitive edge. As a matter of fact, 97% of businesses that have adopted HPC in some way claim they would not be able to survive without it. Not only does it seem like an insightful pursuit, it is a lucrative one, too: IDC estimates that for every $43 spent on HPC, users can generate $515 in revenue. Interestingly, market intelligence agencies are taking off by using HPC to process and make sense of the data created by today’s devices, building an understanding of consumer behaviour and preferences in a whole new way.


As businesses face the digital-native generation, only HPC has the power not only to make sense of big data and behaviour, but also to develop the apps and services that this audience expects to be simple and seamless.
If you have ever been surprised by the accuracy of recommendations when shopping online, or found great satisfaction in the simplicity of your banking app, chances are the service providers invested in high-performance computing platforms and the necessary software to make it happen.

Opinions differ on whether data is the new oil, as the amount and complexity of data being produced is ahead of the apps that interpret it. What is evident is that the digital-native generation is impatient when trying new products and quick to condemn the provider on encountering even the smallest flaw. To minimise these risks, massive compute needs to run behind product development as well as in production, keeping the experience flexible and responsive.

Having established the value of HPC across industries, another question inevitably arises: how much needs to be invested in such solutions, what is their long-term use, and how does one keep up as the technology becomes more advanced?

To meet the needs of different customers, HPC as a Service emerged, meeting the compute demand for sporadic tasks and time-bound projects, as opposed to leaving costly hardware gathering dust between uses. It makes sense for a moving firm to buy a van, but very few families would buy their own van and let it sit idle until the next big move. In the same way, HPCaaS is simply sensible: the resource is there for as long as you need it, and the price reflects a service rather than a legacy of hardware.

What is there not to like? In the past few years a number of service providers have created an HPCaaS market. Though the price might be attractive, another, less visible, cost dominates the business landscape as a whole. For supercomputers (and servers in general), most of the running cost goes into cooling the machines, as they consume energy and generate astounding amounts of heat. This makes datacenter location a strategic factor for service providers, as colder climates bring down costs for both sides. Additionally, as CO2 neutrality is about to become a business imperative, it is essential when choosing an HPCaaS provider to ensure sustainable energy sourcing. After all, no one wants surprisingly high expenses or contract changes that compromise business continuity.
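To make the cooling point concrete, here is an illustrative-only bit of arithmetic; the load, PUE values and electricity prices below are hypothetical placeholders, not figures from Opin Kerfi or any particular facility.

```python
# Illustrative-only arithmetic: how cooling overhead (PUE) and energy price
# drive running costs. The load, PUE values and prices below are hypothetical
# placeholders, not figures for any particular facility.
it_load_kw = 500                          # hypothetical HPC cluster draw
hours_per_year = 24 * 365

scenarios = {
    # name: (PUE = total power / IT power, electricity price in $/kWh)
    "warm-climate site": (1.6, 0.15),
    "free-cooled site": (1.1, 0.07),
}
for name, (pue, price) in scenarios.items():
    annual_cost = it_load_kw * pue * hours_per_year * price
    print(f"{name}: ~${annual_cost:,.0f} per year")
```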

Where can the best return on investment for HPC be found?

IT is, at the end of the day, an expense, and massive compute an even larger one.
When it comes to the investment itself, most prefer an independent opinion. With HPC clearly an opportunity to break through, an on-demand model available, and sustainability and carbon neutrality on the legislative horizon, the options are actually quite limited, especially since no budget is limitless, even for such a lucrative pursuit.
Analysts such as IDC, PwC and Cushman & Wakefield agree: with its low temperature fluctuations and cool climate, Iceland seems made to host energy-demanding, heat-producing machines. Best of all, this brings costs down for the company in question compared with HPC environments hosted or consumed elsewhere, and high-performance computing, a growing model for success, stands to benefit most.

Highly industrialised countries in Europe have very little energy surplus and are a poor prospect for expanding datacenters at an affordable price. Iceland, on the other hand, proudly sources 100% of its electricity from hydro and geothermal energy. Combined with a highly reliable energy grid and high-performance connections to both Europe and North America (both of which have capacity to carry far more traffic than they do today), Iceland’s profile sets it apart from other popular locations.


Expansion without limitations: forget the uncertain availability and fluctuating prices of unsustainable energy and unexpected carbon tax bills.
While enterprises face uncertainty, especially as datacenters and the services they deliver grow in importance, hosting and consuming from Iceland-based datacenters puts you ahead of present and forthcoming constraints, especially if you are based in Europe. Future-proof your investment now and skip the disruption of changing IT service provider or facing carbon tax bills. Thinking ahead this way puts you in a different gear, focused on development and growth rather than reactive protection of your business operations.

Fascinating outcomes have already been seen from using HPCaaS and dedicated HPC for innovation and product development: faster time to market as prototypes are tested virtually, and cost savings from understanding engine energy efficiency through algorithms and calculations rather than physical trials. The opportunities are as numerous as your latest ideas, free to become reality.

Institutions hosting their HPC, whether as a service or as a permanent environment, are already reaping the rewards of that decision, delighted by the results and truly optimising their business with no limits in mind.

Opin Kerfi’s HPCaaS offering is proudly sustainable and, consequently, highly economical, safeguarding business continuity ahead of forthcoming CO2 neutrality legislation. The cost and technology barrier is no longer an issue, meaning enterprises can expand and innovate fearlessly.



How Do You Prepare A Supercomputer For Space Travel?


A new experiment by Hewlett Packard Enterprise & NASA hopes to prove off-the-shelf HPCs are space-ready

The first components of the International Space Station launched in 1998, and the station has transformed radically since then, gaining new modules and replacing outdated equipment. Computers have always played a significant role aboard, and NASA has decided it’s time for something new: the first supercomputer in space built from commercial off-the-shelf equipment, designed by Hewlett Packard Enterprise.

Terrestrial vs Extraterrestrial Computers

Spaceflight represents the cutting edge of technology. However, the computers used in spacecraft are often fairly primitive, sometimes a generation or two behind the latest models on Earth. This is because any computer sent into space has to go through a hardening process. Space computers need to handle the rigors of space travel, including radiation that can damage circuits and other critical components, and they need to survive launches, which can be harsh. The term “hardening” refers to ensuring computer components can operate reliably under these conditions.

Hardening

The ISS’s supercomputer will be hardened in a new way: through software. Historically, computers have been hardened through novel insulation methods and, most importantly, through redundant circuits. Recognising that modern off-the-shelf hardware is already somewhat hardened, HPE investigated using software to slow down operations when adverse conditions are detected, preventing glitches caused by solar flares and other radiation. This approach has the potential to change how computers in space are built and used by cutting costs and development time.
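A highly simplified Python sketch of the idea: software watches environment or error-rate telemetry and throttles the machine when conditions look risky. The read_radiation_level and set_cpu_frequency helpers are hypothetical placeholders for illustration, not HPE's actual flight software.

```python
import random
import time

# Simplified sketch of software-based hardening: monitor telemetry and
# throttle compute when conditions look risky. read_radiation_level() and
# set_cpu_frequency() are hypothetical placeholders, not HPE's real code.
def read_radiation_level():
    """Placeholder for a sensor or error-counter reading, normalised to 0..1."""
    return random.uniform(0.0, 1.0)

def set_cpu_frequency(fraction_of_max):
    """Placeholder for an OS-level frequency or duty-cycle control."""
    print(f"running at {fraction_of_max:.0%} of nominal speed")

SAFE_THRESHOLD = 0.7

for _ in range(5):                        # one check per monitoring interval
    if read_radiation_level() > SAFE_THRESHOLD:
        set_cpu_frequency(0.25)           # slow down until conditions improve
    else:
        set_cpu_frequency(1.0)
    time.sleep(0.1)
```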

Hardening can “take years,” says HPE’s Dr. Mark R. Fernandez, the mission’s co-principal investigator for software and HPE’s HPC technology officer.

Since you will be sending up into space two- or three-year-old, or possibly older, technology, then by the time it takes a year to get to Mars, the multiple years it would be there, and the years to return, you could be using really antiquated equipment

We spoke to Dr Fernandez in depth about the mission, its tests and hazards. Listen to the podcast below or on Apple Podcasts.

Compute in Space

The ISS’s first supercomputer follows a long history of computers used in space. Perhaps the most famous example is the Apollo Guidance Computer, used for navigation and to guide and control the spacecraft. The machine was advanced for its time: its 2.048 MHz processor could crunch plenty of numbers for spaceflight, and its 16-bit word length was ahead of many contemporaries. However, the computer weighed nearly 70 pounds, and its performance would be eclipsed by the first generation of personal computers released only a decade later.

The ISS (Image: NASA)

A New Standard in Power

Running on HPE’s Apollo 40 two-socket servers, the ISS’s new computer will be powered by Intel Broadwell processors. Combined, its processors will provide more than 1 teraflop of performance, placing it within supercomputer territory by historical standards. While the teraflop barrier was first broken on Earth in 1997, HPE’s computer takes up far less space and uses significantly less power; the first computer to break the teraflop barrier relied on more than 7,000 nodes, while HPE’s extraterrestrial machine relies on only two and takes up just a quarter of a rack compared to 104 cabinets.
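A quick bit of arithmetic with the figures above shows how far compute density has come; these are numbers derived from the text for illustration, not additional specifications.

```python
# Rough density comparison using the figures quoted in the text.
flops = 1e12                              # ~1 teraflop in both cases

nodes_1997, cabinets_1997 = 7000, 104     # first terascale system on Earth
nodes_iss, rack_fraction_iss = 2, 0.25    # HPE's system aboard the ISS

print(f"1997: ~{flops / nodes_1997 / 1e9:.1f} GFLOPS per node across {cabinets_1997} cabinets")
print(f"ISS:  ~{flops / nodes_iss / 1e9:.1f} GFLOPS per node in {rack_fraction_iss} of a rack")
```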

Overcoming Challenges

In addition to ensuring components were properly hardened for space, HPE had to work with NASA to overcome several other challenges. While the space station uses solar-powered 48-volt DC power, the off-the-shelf computing components require 110-volt AC power; an inverter supplied by NASA filled this gap. Even more challenging was cooling, as typical server fans aren’t appropriate for use on the space station, and expert engineering was needed to adapt standard server equipment to the station’s unique water cooling system. Of course, computer engineers won’t be aboard to install the computer, so it had to ship with detailed instructions so that installation and maintenance could be performed by non-experts.

A Worthwhile Experiment

Although having a supercomputer is always useful, the ISS doesn’t need it to function. However, the data collected while the supercomputer remains onboard will help determine whether the new hardening techniques are appropriate for long-term use. Furthermore, the knowledge gained can help improve Linux, which powers nearly all of the world’s supercomputers and a strong majority of web servers. Hardened computers aren’t only needed for space, as military and nuclear uses require hardening as well. As with all research, it’s difficult to predict the outcome, but being able to harden off-the-shelf devices through software could have a profound impact across a range of fields.