Archive for the ‘Quantum Computer’ Category

Now Is the Time to Plan for Post-Quantum Cryptography – DARKReading

RSA CONFERENCE 2022 Even the most future-facing panels at this year's RSA Conference are grounded in the lessons of the past. At the post-quantum cryptography keynote "Wells Fargo PQC Program: The Five Ws," the moderator evoked the upheaval from RSAC 1999, when a team from the Electronic Frontier Foundation and Distributed.net broke the Data Encryption Standard (DES) in less than a day.

"We're trying to avoid the scramble" when classical cryptography techniques like elliptic curve and the RSA algorithm inevitably fall to quantum decrypting, said Sam Phillips, chief architect for information security architecture at Wells Fargo. And he set up the high stakes encryption battles often have: "Where were all the DES implemented? Hint: ATM machines."

"We had to set up teams to see where all we were using[was DES] and then establish the migration plan based upon using a risk-based approach," Phillips said. "We're trying to avoid that by really trying to get ahead of the game and do some planning in this case."

Phillips was joined on stage by Dale Miller, chief architect of information security architecture at Wells Fargo, and Richard Toohey, technology analyst at Wells Fargo.

Toohey, a doctoral candidate at Cornell University, handled most of the technical aspects of quantum computing during the panel.

"For most problems, if you have a quantum calculator and a regular calculator, they can add numbers just as well," he explained. "There's a very small subset of problems that are classically very hard, but for a quantum computer, they can solve [them] very efficiently."

These problems are called NP-hard problems.

"A lot of cryptography, specifically in asymmetric cryptography, relies on these np-hard type problems things like elliptic curve cryptography, the RSA algorithm, famously and when quantum computers are developed enough, they'll be able to brute-force their way through these," Toohey explained. "So that breaks a lot of our modern classical cryptography."

The reason we don't have crypto-breaking quantum computers today, despite headline-making offerings from IBM and others, is that the technology to reach that level of power has not yet been achieved.

"To become a cryptographically relevant quantum computer, a quantum computer needs to have about 1 to 10 million logical qubits, and those logical qubits all need to be made up of about 1,000 physical qubits," Toohey said. "Today, right now, the largest quantum computers are somewhere around 120 physical qubits."

He estimated that producing even the first logical qubit will take three years, and from there the technology has to scale up to "a million or so logical qubits. So it's still quite a few years away."
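A rough back-of-the-envelope calculation, using the figures cited on the panel, makes that scale gap concrete. The sketch below is purely illustrative: it takes the low end of Toohey's 1 million to 10 million logical-qubit range and his roughly 1,000-to-1 error-correction overhead as given assumptions.

```python
# Back-of-the-envelope scale gap, using the panel's estimates (not exact engineering figures).
logical_qubits_needed = 1_000_000     # low end of Toohey's 1-10 million logical-qubit range
physical_per_logical = 1_000          # error-correction overhead he cited
physical_qubits_needed = logical_qubits_needed * physical_per_logical

physical_qubits_today = 120           # "largest quantum computers" per the panel

scale_up = physical_qubits_needed / physical_qubits_today
print(f"Physical qubits needed: {physical_qubits_needed:,}")        # 1,000,000,000
print(f"Required scale-up over today's machines: ~{scale_up:,.0f}x")
```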

Another technical challenge that needs solving before we get these powerful quantum computers is the cooling systems they require.

"Qubits are incredibly sensitive; most of them have to be held at very low, cryogenic temperatures," Toohey explained. "So because of that, quantum computing architecture is incredibly expensive right now."

Other problems include decoherence and error correction. The panel agreed that the combination of these issues means crypto-cracking quantum computers are eight to 10 years away. But that doesn't mean we have a decade to address PQC.

The panel was named for the journalistic model of five questions that start with the letter "w," but that didn't come up until late in the audience Q&A portion.

"Sam was asking the what, the who, the why, the where, and the when," Miller said. "So I think we've covered that in our conversations here."

Most of the titular questions were somewhat vague and a matter of judgment. However, on the concept of when you should start planning for the post-quantum future, there was complete agreement: Now.

"You've got to start the process now, and you have to move yourself forward so that you are ready when a quantum computer comes along," Miller said.

Phillips concurred.

"There is not right now a quantum computer that is commercially viable, but the amount of money and effort going into the work is there to move it forward, because people recognize the benefits that are there, and we are recognizing the risk," he said. "We feel that it's an eventuality, that we don't know the exact time, and we don't know when it'll happen."

Toohey suggested beginning preparations with a crypto inventory, and again, the time to start is now.

"Discover where you have instances of certain algorithms or certain types of cryptography, because how many people were using Log4j and had no idea because it was buried so deep?" he said. "That's a big ask, to know every type of cryptography used throughout your business with all your third parties that's not trivial. That's a lot of work, and that's going to need to be started now."

Wells Fargo has a goal to be ready to run post-quantum cryptography in five years, which Miller described as "a very aggressive goal."

"So the time to start is now," he said,"and that's one of the most important takeaways from this get-together."

Pivoting is a key marker of agility for the panel, and agility is vital for being able to react to not just quantum threats, but whatever comes next.

"The goal here should be crypto agility, where you're able to modify your algorithms fairly quickly across your enterprise and be able to counter a quantum-based attack," Miller said. "And I'm really not thinking on a day-to-day basis about when is the quantum computer going to get here. For us, it's more about laying a path and a track for quantum resiliency for the organization."

Toohey agreed about the importance of agility.

"Whether it's a quantum computer or new developments in classical computing, we don't want to be put in a position where it takes us 10 years to do any kind of cryptographic transition," he said. "We want to be able to pivot and adapt to the market as new threats come out."

Because there will be computers that can break current cryptography techniques, organizations do need to develop new encryption methods that stand up to quantum brute-force attacks. But that's only the half of it.

"Don't just focus on the algorithms," Phillips said. "Start looking at your data. What data are you transiting back and forth? And look at devaluing that data. Where do you need to have that confidential information, and what can you do to remove that from the exposure? It will help a lot not only in the crypto efforts, but in terms of who has access to the data and why they have to have access."

One open question loomed over the discussion: When would NIST announce its picks for the new standards to develop for post-quantum cryptography? The answer: Not yet. But the uncertainty is no cause for inaction, Miller said.

"So NIST will continue to work with other vendors and other companies and research groups to look at algorithms that are further out there," he said. "Our job is to be able to allow those algorithms to come into place quickly, in a very orderly manner, without disrupting business or breaking your business processes and [to] be able to keep things moving along."

Phillips agreed. "That's one of the reasons for pushing on plug and play," he said. "Because we know that the first set of algorithms that come out may not satisfy the long-term need, and we don't want to keep jumping through these hoops every time somebody goes through it."

Toohey tied the standards question back into the concept of preparing now.

"That way, when NIST finally finishes publishing their recommendations, and standards get developed in the coming years, we're ready as an industry to be able to take that and tackle it," he said."That's going back to crypto agility and this mindset that we need to be able to plug and play. We need to be able to pivot as an industry very quickly to new and developing threats."

Here is the original post:
Now Is the Time to Plan for Post-Quantum Cryptography - DARKReading

I beheld a quantum computer. It was weird and excellent. – Stuff

IBM

IBM scientist Andreas Fuhrer looks at the cryogenic refrigerator that keeps a quantum computer's qubits super cold.

Peter Griffin is a freelance science and technology writer. He was the founding director of the Science Media Centre and founding editor of Sciblogs.co.nz.

OPINION: You have to hand it to the likes of Niels Bohr, Werner Heisenberg and Erwin Schrödinger, scientists who were instrumental in developing the field of quantum mechanics about 100 years ago.

They had their work cut out for them trying to explain to a sceptical public the forces that dictate how the world works on the atomic and subatomic scale.

Even Albert Einstein, whose own discoveries were towering reference points for these scientists, could never reconcile that quantum measurements and observations are fundamentally random.

"It is this view against which my instinct revolts," he wrote in 1945.


We've learned much about quantum mechanics since then, including how the principles of superposition and entanglement explain how information can be processed in ways computers like our laptops and smartphones can't match.

Last week I stood for the first time in front of a fully functioning quantum computer, IBM's Quantum System One, at the company's research labs in Yorktown Heights, New York.

The machine looks like a beautiful gold chandelier shrouded in a metal case that creates a vacuum in which the whole device is chilled to just above absolute zero, as cold as outer space.

The highly controlled conditions are required to eliminate interference that could prevent the quantum chip at the tip of the chandelier from doing its thing, which is to activate qubits, the quantum version of the bits, the digital ones and zeros our binary computers work with.

IBM, Google, Microsoft and numerous other companies and research institutions have demonstrated how quantum computers are very good at a narrow range of computational tasks, such as simulating nature. That's already seen them put to work modelling molecules and in the complex field of materials science.

ROBERT KITCHIN/Stuff

Stuff science columnist Peter Griffin.

Programmers are now working on computer algorithms to expand the ways in which quantum computers can be used. Cryptography experts think large quantum computers could crack existing encryption systems, which would cause a cybersecurity nightmare.

But quantum computers will need to scale up massively in power and be less prone to errors to be useful more broadly. IBM last year produced Eagle, a 127-qubit processor for its quantum computer, and plans to introduce Osprey, its 433-qubit chip, this year.

Eventually machines with hundreds of thousands or millions of qubits could be available for number crunching on a scale we've never seen before.

It's unlikely you'll ever have a quantum computer on your desk or in your garage. Instead, IBM and its rivals rent access to their quantum computers as a cloud computing service.

Today's regular computers aren't heading for the dustbin either. They are better at a wide range of tasks and can work in tandem to make quantum computers more useful.

It's unclear whether quantum computing can be properly applied to solving the big problems facing the world, such as new antibiotics or climate change.

But the blistering pace of technical progress suggests it's a field heating up and one worth watching.

Read the original:
I beheld a quantum computer. It was weird and excellent. - Stuff

How Zapata and Andretti Motorsport Will Use Quantum Computing to Gain an Edge at the Indianapolis 500 – Quantum Computing Report

You might think that auto racing would not be a good application for quantum computing because the teams consist of grease monkeys who may know auto mechanics but wouldn't know how to leverage advanced computing. But you would be wrong.

Auto racing is a big business where there can be a very thin line between success and failure. To give you an idea of how small things can make a big difference, look at the results of the 2015 Indianapolis 500. In that race, the difference in finishing time between first-place finisher Juan Pablo Montoya and second-place finisher Will Power was 104.6 milliseconds. And those 104.6 milliseconds made the difference between winning a first-place prize of $2.44 million or not.

It turns out that an auto race generates a lot of data, about 1 terabyte per car in a typical race, that, if analyzed and used wisely, can help give a racing team a critical edge. To that end, Zapata Computing and Andretti Motorsports formed a partnership earlier this year to work together on race analytics and see how they could use Zapata's advanced analytics, quantum techniques, and Orquestra hybrid classical/quantum data and workflow manager to win more races.

Although this work between the two companies has just started, a big event for both companies will occur this weekend with the 2022 Indianapolis 500 race. We talked with Chris Savoie, CEO of Zapata Computing, and he described three of the first use cases where they believe advanced analytics, machine learning, and quantum computing can potentially make a difference.

Tire Degradation Analysis

When you have a car going at over 200 MPH, the tires wear out very quickly. In a typical Indianapolis 500 race, the tires can be changed five or more times, and each change requires a time-wasting pit stop. What's more, the tires have different characteristics when they have just been put on and when they have been used a while. So, the racing manager has a lot of strategic variables to juggle: when should the car be called in for a pit stop to change the tires, which set of tires should go on the car, how many tire changes should there be, and what are the current weather and track conditions? For a data analyst, this is a large optimization problem, and it will be one of the first areas where Zapata will work with Andretti to create an ML model that can help guide these decisions using data collected in previous race sessions as well as data collected in real time during the race.
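As a toy illustration of that optimization framing, the sketch below scores candidate pit laps against an assumed linear lap-time degradation curve. It is not Zapata's model; every constant in it is invented for illustration.

```python
# Toy illustration of the tire-strategy framing above: pick the pit lap that
# minimizes total race time under an assumed linear degradation of lap time.
# All constants are invented for illustration, not real Indy 500 data.
BASE_LAP = 40.0          # seconds per lap on fresh tires (assumed)
DEGRADATION = 0.05       # seconds added per lap of tire age (assumed)
PIT_LOSS = 25.0          # time lost to a pit stop (assumed)
RACE_LAPS = 60           # shortened race for the example

def race_time(pit_lap: int) -> float:
    total = 0.0
    tire_age = 0
    for lap in range(1, RACE_LAPS + 1):
        if lap == pit_lap:          # stop before this lap: pay the pit loss, reset tire age
            total += PIT_LOSS
            tire_age = 0
        total += BASE_LAP + DEGRADATION * tire_age
        tire_age += 1
    return total

best = min(range(2, RACE_LAPS), key=race_time)
print(f"Best single pit stop on lap {best}: {race_time(best):.1f} s")
```

The real problem adds tire compounds, multiple stops, weather and live telemetry, which is what makes it a candidate for heavier machine-learning and optimization machinery.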

Fuel Savings Opportunities

Cars need to be refueled during the race. In addition, the driver has some control over fuel consumption by the way he drives. If a racing team can find a way to minimize the number of refuelings and avoid a pit stop, it can save a lot of time. What's more, you don't want to cross the finish line with a full tank, because that would be a waste. In the 2016 race, driver Alexander Rossi took a gamble and decided not to go for a final pit stop to refuel with 33 laps to go. It turns out he ran out of gas at the very end and coasted across the finish line. But he won the race, because the second-place driver did decide to refuel and the extra pit stop time cost him the race. So, finding ways to improve fuel consumption and determine the best timing for refueling also turns out to be an optimization problem, one that may present an opportunity to use machine learning and advanced analytics to find the best solution and improve race performance.
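The Rossi story reduces, in its simplest form, to an expected-time comparison for the final stint: stop once more and push, or stretch the fuel at a slightly slower pace. The numbers in the sketch below are invented placeholders; a real model would estimate them from telemetry.

```python
# Toy fuel-strategy comparison for the last stint, echoing the 2016 Rossi example.
# All numbers are invented for illustration; a real model would be data-driven.
LAPS_REMAINING = 33
NORMAL_LAP = 41.0        # seconds per lap, racing at full pace (assumed)
SAVING_LAP = 41.8        # seconds per lap, lifting and coasting to save fuel (assumed)
PIT_STOP_LOSS = 30.0     # seconds lost to one more refueling stop (assumed)

stop_and_push = LAPS_REMAINING * NORMAL_LAP + PIT_STOP_LOSS
stretch_fuel = LAPS_REMAINING * SAVING_LAP

print(f"Refuel and push : {stop_and_push:.1f} s")
print(f"Stretch the fuel: {stretch_fuel:.1f} s")
print("Better choice   :", "stretch" if stretch_fuel < stop_and_push else "refuel")
```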

Yellow Flag Predictive Modelling

A yellow flag occurs during the race when there is an accident or debris on the track. Drivers are required to reduce their speed, and passing another car is prohibited. One of the impacts of this is that the relative lead of one car over another is reduced. But it may also be a good time to go in for a pit stop, since the cars aren't going at full speed while the flag is on. If a racing team had a crystal ball and could predict when a yellow flag would occur, it could help them determine their best pit stop strategy. This may seem a little far-fetched, but the Zapata/Andretti team will attempt to create a model for this based upon conditions on the track, the status of the various cars, which particular drivers are in those cars, and other factors collected during the race. It will be interesting to us to see if they can actually create a useful model for when yellow flags may occur from this data.
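Framed as a data problem, yellow-flag prediction is essentially binary classification over track and car conditions. The sketch below shows that framing on synthetic features with scikit-learn; it is only an illustration of the shape of the problem, not the Zapata/Andretti model.

```python
# Illustrative framing of yellow-flag prediction as binary classification.
# Features and labels are synthetic; a real model would use race telemetry.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical features per time window: speed variance, traffic density, laps since last caution
X = rng.normal(size=(n, 3))
# Synthetic labels: cautions made (artificially) more likely when traffic density is high
y = (X[:, 1] + 0.5 * rng.normal(size=n) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

window = np.array([[0.2, 1.8, -0.3]])   # a hypothetical "current" time window
print(f"Estimated caution probability: {model.predict_proba(window)[0, 1]:.2f}")
```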

From an operations standpoint, working in this environment can present some unique challenges. But it also provides learning opportunities for the Zapata team as they face real-world challenges and find ways to solve them that can be used for future product enhancements and customer engagements in other areas. One of the first things to understand is that the racing environment requires real-time decisions, and you do not want to use a quantum computer somewhere in the cloud on race day. The latencies will be too slow, and you don't want to have to struggle with flaky Wi-Fi connections. So, Zapata and Andretti have set up an on-site Race Analytics Command Center housed in a trailer at the track.

Zapata and Andretti aren't going to install a quantum computer in this trailer, but it will have a large amount of classical computing capability to help the team make real-time decisions on race day. Machine learning applications are typically divided into a training phase that develops the optimum coefficients for a model and an execution phase that just runs the model and provides an output based upon the previously established coefficients. The training phase is the most computationally intensive part of an ML model; it does not have to run in real time, and it is a good opportunity for leveraging quantum computing. Executing a model once it is created is not so computationally intensive and can be done on a classical processor. The team can feed in data from previous races and trial runs, create an ML model over many days or weeks, and then execute the ML model in real time on classical computers sitting in this trailer.
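That train-offline, infer-trackside split is a standard pattern, and a minimal sketch of it is shown below. The model choice and file name are assumptions for illustration; the article only establishes that the heavy training step (where quantum or quantum-inspired methods would eventually plug in) runs before race day and the cheap scoring step runs locally.

```python
# Minimal sketch of the train-offline / infer-trackside split described above.
# Model choice and file name are illustrative, not Zapata's actual stack.
import pickle
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def train_offline(X: np.ndarray, y: np.ndarray, path: str = "pit_model.pkl") -> None:
    """Computationally intensive step: run in the days or weeks before the race."""
    model = GradientBoostingRegressor().fit(X, y)
    with open(path, "wb") as f:
        pickle.dump(model, f)

def predict_trackside(features: np.ndarray, path: str = "pit_model.pkl") -> np.ndarray:
    """Cheap step: load the frozen model and score live telemetry in real time."""
    with open(path, "rb") as f:
        model = pickle.load(f)
    return model.predict(features)

if __name__ == "__main__":
    X, y = np.random.rand(200, 4), np.random.rand(200)   # synthetic stand-in data
    train_offline(X, y)
    print(predict_trackside(np.random.rand(1, 4)))
```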

The collaboration between Zapata and Andretti goes well beyond leveraging quantum computing. The overall program will involve working with multiple databases that could be resident with cloud providers, handling edge-computing data coming in from various sensors, and managing workflows that are both classical and quantum in nature. Zapata will be using its Orquestra product to help manage all this.

This will be a long-term collaboration. Because the available quantum computers are not yet powerful enough to provide an advantage, the first implementations of this work will use quantum-inspired algorithms. However, the intent is that as quantum processors become more powerful, these algorithms will eventually be moved to full quantum computers, allowing the companies to create larger, more complex, and more accurate models to further their advantage. Andretti participates in many different types of auto racing and has many different teams, so the two companies will have a lot of opportunities to try out and develop this capability. We also expect the companies will find additional use cases for leveraging advanced computing capabilities as they work together.

For additional information about this collaboration, a news release posted on the Zapata web site can be accessed here.

May 26, 2022

Read this article:
How Zapata and Andretti Motorsport Will Use Quantum Computing to Gain an Edge at the Indianapolis 500 - Quantum Computing Report

QuantWare and QuantrolOx Partner to Ease the Integration of the Control Software With the Hardware Device – Quantum Computing Report


QuantWare is a company that provides QPU chips to customers who want to build their own quantum computer. QuantrolOx is a company that provides automated, machine-learning-based control software for optimum control of qubits. The two companies have announced a partnership to integrate QuantrolOx's software with QuantWare's hardware to create an open-architecture quantum computer solution for customers who want to build their own machine. QuantWare asserts that by working with it and its partners, a customer can create a quantum computer on their own for one-tenth the cost of purchasing a complete system from one of the hardware vendors. Additional information about this partnership can be found in a news release posted on the QuantWare website here.


Continue reading here:
QuantWare and QuantrolOx Partner to Ease the Integration of the Control Software With the Hardware Device - Quantum Computing Report

Q&A with Atos’ Eric Eppe, an HPCwire Person to Watch in 2022 – HPCwire

HPCwire presents our interview with Eric Eppe, head of portfolio & solutions, HPC & Quantum at Atos, and an HPCwire 2022 Person to Watch. In this exclusive Q&A, Eppe recounts Atos' major milestones from the past year and previews what's in store for the year ahead. Exascale computing, quantum hybridization and decarbonization are focus areas for the company, and having won five of the seven EuroHPC system contracts, Atos is playing a big role in Europe's sovereign technology plans. Eppe also shares his views on HPC trends, what's going well and what needs to change, and offers advice for the next generation of HPC professionals.

Eric, congratulations on your selection as a 2022 HPCwire Person to Watch. Summarize the major milestones achieved last year for Atos in your division and briefly outline your HPC/AI/quantum agenda for 2022.

2021 was a strong year for Atos' Big Data and Security teams, despite the pandemic. Atos BullSequana XH2000 was in its third year and was already exceeding all sales expectations. More than 100,000 top-bin AMD CPUs were sold on this platform, and it made one of the first entries for AMD Epyc in the Top500.

We have not only won five out of seven EuroHPC petascale projects, but also delivered some of the most significant HPC systems. For example, we delivered one of the largest climate-study and weather-forecast systems in the world to the European Centre for Medium-Range Weather Forecasts (ECMWF). In addition, Atos delivered a full BullSequana XH2000 cluster to the German climate research center (DKRZ). 2021 also saw the launch of Atos ThinkAI and the delivery of a number of very large AI systems, such as WASP in Sweden.

2022 is the year in which we are preparing the future with our next-gen Atos BullSequana XH3000 supercomputer, a hybrid computing platform bringing together flexibility, performance and energy-efficiency. Announced recently in Paris, this goes along with the work that has started on hybrid computing frameworks to integrate AI and quantum accelerations with supercomputing workflows.

Sovereignty and sustainability were key themes at Atos' launch of its exascale supercomputing architecture, the BullSequana XH3000. Please address in a couple of paragraphs how Atos views these areas and why they are important.

This was a key point I mentioned during the supercomputer's reveal. For Europe, the real question is: should we indefinitely rely on foreign technologies to find new vaccines, develop autonomous electric vehicles, and find strategies to face climate change?

The paradox is that Europe leads the semiconductor substrate and manufacturing-equipment markets (with Soitec and ASML) but has no European foundry in the <10nm class yet. It is participating in the European Processor Initiative (EPI) and will implement SiPearl technologies in the BullSequana XH3000, but it will take time for those technologies to mature enough to replace others.

Atos has built a full HPC business in less than 15 years, becoming number one in Europe and in the top four worldwide in the supercomputer segment, with its entire production localized in its French factory. We are heavily involved in all projects that are improving European sovereignty.

EU authorities today stand a bit behind the USA and China in how their regulations manage large petascale or exascale procurements, as well as in how funding flows to local companies developing HPC technologies. This is a major topic.

Atos has developed a significant amount of IP, spanning supercomputing platforms, low-latency networks, cooling technologies, software and AI, and security, along with large manufacturing capabilities in France, with sustainability and sovereignty as a guideline. We are partnering with a number of European companies, such as SiPearl, IQM, Pasqal, AQT, Graphcore, ARM, OVH and many labs, to continue building this European sovereignty.

Atos has announced its intention to develop and support quantum accelerators. What is Atos' quantum computing strategy?

Atos has taken a hardware-agnostic approach in crafting quantum-powered supercomputers and enabling end-user applications. Atos' ambition is to be a major player in multiple domains, amongst which are quantum programming and simulation, the next generation of quantum-powered supercomputers, consulting services, and of course, quantum-safe cybersecurity. Atos launched the Atos Quantum Learning Machine (QLM) in 2017, a quantum appliance emulating almost all target quantum processing units, with abstractions to connect to real quantum computing hardware when available. We have been very successful with the QLM in large academic and research centers on all continents. In 2021, there was a shift of many commercial companies starting to work on real use cases, and the QLM is the best platform to start these projects without waiting for hardware to be available at scale.

Atos plays a central role in European-funded quantum computing projects. We are cooperating with NISQ QPU makers to develop new technologies and increase their effectiveness in a hybrid computing scenario. This includes, but is not limited to, hybrid frameworks, containerization, parallelization, VQE, GPU usage and more.

Where do you see HPC headed? What trends and in particular emerging trends do you find most notable? Any areas you are concerned about, or identify as in need of more attention/investment?

As for upcoming trends in the world of supercomputing, I see a few low-noise trends: some technological barriers that may trigger drastic changes, and some emerging technologies that may have large impacts on how we do HPC in the future. Most players, and Atos more specifically, are looking into quantum hybridization and decarbonization, which will open many doors in the near future.

Up to this point, the HPC environment has been quite conservative. I believe that administrators are starting to see the benefits of orchestration and microservice-based cluster management. There are some obstacles, but I do see more merits than issues in containerizing and orchestrating HPC workloads. There are also some rising technological barriers that may push our industry into a corner, while at the same time giving us opportunities to change the way we architect our systems.

High-performance, low-latency networks make massive use of copper cables. With higher data rates (400Gb/s in 2022 and 800Gb/s in 2025), the workable copper cable length will be divided by four, with copper replaced by active or fiber cables and cabling costs certainly increasing by five or six times. This is clearly an obstacle for systems that are going to range around 25,000 endpoints, with a cabling budget in the tens of millions.
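To make the order of magnitude concrete, here is a rough, purely illustrative calculation. The link count per endpoint and the unit prices are invented placeholders, not Atos figures; only the roughly five-to-six-times cost multiplier comes from the interview.

```python
# Rough illustration of the cabling-cost concern above.
# Cable counts and unit prices are invented placeholders, not Atos figures.
endpoints = 25_000
links_per_endpoint = 2                  # assumed fabric links per endpoint
copper_cable_price = 100                # assumed price per passive copper cable, in dollars
active_or_fiber_multiplier = 5.5        # midpoint of the "5 or 6x" cited in the text

copper_budget = endpoints * links_per_endpoint * copper_cable_price
fiber_budget = copper_budget * active_or_fiber_multiplier

print(f"Passive copper cabling : ${copper_budget:,.0f}")
print(f"Active/fiber cabling   : ${fiber_budget:,.0f}")   # lands in the tens of millions
```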

This very simple problem may impose a paradigm shift in the way devices, from a general standpoint, are connected and communicate together. It triggers deeper architectural design-point changes, from racks to nodes and down to elements that are deeply integrated today, such as compute cores, buses, memory and associated controllers, and switches. I won't say the 800Gb/s step alone will change everything, but the maturity of some technologies, such as silicon photonics, and the emerging standardization on very powerful protocols like CXL will enable a lot more flexibility while continuing to push the limits. Also, note that CXL is just in its infancy, but it already shows promise for a memory-coherent space between heterogeneous devices, centralized or distributed, mono- or multi-tenant memory pools.

Silicon photonic integrated circuits (PICs), because they theoretically offer Tb/s bandwidth through native fiber connections, should allow a real disaggregation between devices that are today very tightly connected together on ever more complex and expensive PCBs.

What will be possible inside a node will be possible outside of it, blurring the traditional frontier between a node, a blade, a rack and a supercomputer, offering a world of possibilities and new architectures.

The market is probably not fully interested in finding an alternative to the ultra-dominance of the Linpack or its impact on how we imagine, engineer, size and deliver our supercomputers. Ultimately, how relevant is its associated ranking to real life problems? I wish we could initiate a trend that ranks global system efficiency versus available peak power. This would help HPC players to consider working on all optimization paths rather than piling more and more compute power.
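One possible shape for the ranking Eppe wishes for, delivered efficiency weighed against available peak power rather than Linpack alone, is sketched below with fictional systems and numbers, purely to illustrate the metric.

```python
# Purely illustrative metric along the lines suggested above: rank systems by
# delivered application throughput per unit of peak power, not by Linpack alone.
# The systems and numbers are fictional examples.
systems = [
    # (name, delivered application throughput [arbitrary units], peak power [MW])
    ("System A", 900.0, 20.0),
    ("System B", 700.0, 10.0),
    ("System C", 1200.0, 30.0),
]

ranked = sorted(systems, key=lambda s: s[1] / s[2], reverse=True)
for name, throughput, power in ranked:
    print(f"{name}: {throughput / power:.1f} units per MW")
```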

Lastly, I am concerned by the fact that almost nothing has changed in the last 30 years in how applications interact with data. Well, HPC certainly uses faster devices. We now have clustered shared file systems like Lustre. Also, we have invented object-oriented key-and-value abstractions, but in reality storage subsystems are most of the time centralized. They are connected on the high-speed fabric. They are also oversized to absorb checkpoints from an ever-growing node count, while in nominal regime they use only a portion of the available bandwidth. Ultimately, with workloads by nature spread across the whole fabric, most of the power consumption comes from I/Os.

However, it's time to change this situation. There are some possible avenues, and as a side effect they will improve the global efficiency of HPC workloads, and hence the sustainability and the value of HPC solutions.

More generally, what excites you about working in high-performance computing?

I've always loved to learn and be intellectually stimulated, especially in my career environment. High-performance computing, along with AI and now quantum, gives me constant food for thought and more options to solve big problems than I will ever be able to absorb.

I appreciate pushing the limits every day, driving the Atos portfolio and setting the directions, ultimately helping our customers to solve their toughest problems. This is really rewarding for me and our Atos team. I'm never satisfied, but I'm very proud of what we have achieved together, bringing Atos into the top four ranking worldwide in supercomputers.

What led you to pursue a career in the computing field and what are your suggestions for engaging the next generation of IT professionals?

I've always been interested in technology, initially attracted by everything that either flew or sailed; really, I'm summarizing this as everything that plays with the wind. In my teenage years, after experiencing sailboards and gliders, I was fortunate enough to have access to my first computer in late 1979, when I was 16. My field of vision prevented me from being a commercial pilot, so I started pursuing a software engineering master's degree that led me into the information technology world.

When I began my career in IT, I was not planning any specific path to a specific domain. I simply took all opportunities to learn a new domain, work hard to succeed, and jump to something new that excited me. In my first position, I was lucky enough to work on an IBM mainframe doing CAD with some software development, as well as embracing a completely unknown system engineering role that I had to learn from scratch. Very educational! I jumped from developing in Fortran to doing system engineering on VM/SP and Unix. Then I learned Oracle RDBMS and the Internet at Intergraph, and HPC servers and storage at SGI. I pursued my own startups, and now I'm leading the HPC, AI and quantum portfolio at Atos.

What I would tell the next generation of IT professionals for their career is to:

First, only take roles in which you will learn new things. It could be managerial, financial, technical; it doesn't matter. To evolve in your future career, the more diverse experience you have, the better you will be able to react and be effective. Move to another role when you are not learning anymore or when you have been far too long in your comfort zone.

Second, look at problems to solve, think out of the box and with a 360-degree vision. Break the barriers, and change the angle of view to give new perspectives and solutions to your management and customers.

Also, compensation is important, but it's not everything. What you will do, how it will make you happy in your life, and what you will achieve professionally are more important. Ultimately, compare your salary with the free time that remains to spend with your family and friends. Lastly, compensation is not always an indicator of success; rather, changing the world for the better and making our planet a better place to live is the most important benefit you will find in high-performance computing.

Outside of the professional sphere, what can you tell us about yourself: family stories, unique hobbies, favorite places, etc.? Is there anything about you your colleagues might be surprised to learn?

Together with my wife, we are the proud parents of two beautiful adult daughters. We also have our three-year-old bombshell of a Jack Russell, named Pepsy, who brings a lot of energy to our house.

We live northwest of Paris in a small city on the Seine river. I'm still a private pilot and still cruise sailboats with family and friends. I recently participated in the ARC 2021 transatlantic race with three friends on a trimaran, a real challenge and a great experience. Soon, we're off to visit Scotland for a family vacation!

Eppe is one of 12 HPCwire People to Watch for 2022. You can read the interviews with the other honorees at this link.

See the article here:
Q&A with Atos' Eric Eppe, an HPCwire Person to Watch in 2022 - HPCwire