Archive for the ‘Quantum Computer’ Category

The Hyperion-insideHPC Interviews: Dr. Michael Resch Talks about the Leap from von Neumann: ‘I Tell My PhD Candidates: Go for Quantum’ – insideHPC

Dr. Michael M. Resch of the University of Stuttgart has professorships, degrees, doctorates and honorary doctorates from around the world, and he has studied and taught in Europe and the U.S. But for all the work he has done in supercomputing over the past three-plus decades, he boils down his years in HPC to working with the same, if always improving, von Neumann architecture. He's eager for the next new thing: quantum. "Going to quantum computing, we have to throw away everything and we have to start anew," he says. "This is a great time."

In This Update. From The HPC User Forum Steering Committee

By Steve Conway and Thomas Gerard

After the global pandemic forced Hyperion Research to cancel the April 2020 HPC User Forum planned for Princeton, New Jersey, we decided to reach out to the HPC community in another way by publishing a series of interviews with members of the HPC User Forum Steering Committee. Our hope is that these seasoned leaders' perspectives on HPC's past, present and future will be interesting and beneficial to others. To conduct the interviews, Hyperion Research engaged insideHPC Media.

We welcome comments and questions addressed to Steve Conway, sconway@hyperionres.com or Earl Joseph, ejoseph@hyperionres.com.

This interview is with Prof. Dr. Dr. h.c. mult. Michael M. Resch. He is dean of the faculty for energy-, process- and biotechnology of the University of Stuttgart, director of the High Performance Computing Center Stuttgart (HLRS), the Department for High Performance Computing, and the Information Center (IZUS), all at the University of Stuttgart, Germany. He was an invited plenary speaker at SC07. He chairs the board of the German Gauss Center for Supercomputing (GCS) and serves on the advisory councils for Triangle Venture Capital Group and several foundations. He is on the advisory board of the Paderborn Center for Parallel Computing (PC2). He holds a degree in technical mathematics from the Technical University of Graz, Austria and a Ph.D. in engineering from the University of Stuttgart. He was an assistant professor of computer science at the University of Houston and was awarded honorary doctorates by the National Technical University of Donezk (Ukraine) and the Russian Academy of Science.

He was interviewed by Dan Olds, HPC and big data consultant at Orionx.net.

The HPC User Forum was established in 1999 to promote the health of the global HPC industry and address issues of common concern to users. More than 75 HPC User Forum meetings have been held in the Americas, Europe and the Asia-Pacific region since the organization's founding in 2000.

Olds: Hello, I'm Dan Olds on behalf of Hyperion Research and insideHPC, and today I'm talking to Michael Resch, who is an honorable professor at the HPC Center in Stuttgart, Germany. How are you, Michael?

Resch: I am fine, Dan. Thanks.

Olds: Very nice to talk to you. I guess let's start at the beginning. How did you get involved in HPC in the first place?

Resch: That started when I was a math student and I was invited to work as a student research assistant and, by accident, that was roughly the month when a new supercomputer was coming into the Technical University of Graz. So, I put my hands on that machine and I never went away again.

Olds: You sort of made that machine yours, I guess?

Resch: We were only three users. There were three user groups and I was the most important user of my user group because I did all the programming.

Olds: Fantastic, that's a way to make yourself indispensable, isn't it?

Resch: In a sense.

Olds: So, can you kind of summarize your HPC background over the years?

Resch: I started doing blood flow simulations, so at first I looked into this very traditional Navier-Stokes equation that was driving HPC for a long time. Then I moved on to groundwater flow simulations: pollution of groundwater, tunnel construction work, and everything, until after like five years I moved to the University of Stuttgart, where I started to work with supercomputers, focusing more on the programming side, the performance side, than on the hardware side. This is sort of my background in terms of experience.

In terms of education, I studied a mixture of mathematics, computer science and economics, and then did a Ph.D. in engineering, which was convenient if you're working in Navier-Stokes equations. So, I try to bring all of these things together to make an impact in HPC.

Olds: What are some of the biggest changes you've seen in HPC over your career?

Resch: Well, the biggest change is probably that when I started, as I said, there were three user groups. These were outstanding experts in their field, but supercomputing was nothing for the rest of the university. Today, everybody is using HPC. That's probably the biggest change, that we are moving from something where you had one big system and a few experts around that system, and you moved to a larger number of systems and tens of thousands of experts working with them.

Olds: And, so, the systems have to get bigger, of course.

Resch: Well, certainly, they have to get bigger. And they have to get, I would say, more usable. That's another feature, that now things are more hidden from the user, which makes it easier to use them. But at the same time, it takes away some of the performance. There is this combination of hiding things away from the user and then the massive parallelism that we saw, and that's the second most important thing that I think we saw in the last three decades. That has made it much more difficult to get high sustained performance.

Olds: Where do you see HPC headed in the future? Is there anything that has you particularly excited or concerned?

Resch: [Laughs] I'm always excited and concerned. That's just normal. That's what happens when you go into science and that's normal when you work with supercomputers. I see, basically, two things happening. The first thing is that people will merge everything that has to do with data and everything that has to do with simulation. I keep saying it's data analytics, machine learning, artificial intelligence; it's sort of a development from raw data to very intelligent handling of data. And these data-intensive things start to merge with simulation, like we see people trying to understand what they did over the last 20 years by employing artificial intelligence to work its way through the data, trying to find what we have already done and what we should do next, things like that.

The second thing that is exciting is quantum computing. It's exciting because it's out of the ordinary, in a sense. You might say that over the last 32 years the only thing I did was work with improved technology and improved methods and improved algorithms or whatever, but I was still working in the same John von Neumann architecture concept. Going to quantum computing we have to throw away everything and we have to start anew. This is a great time. I keep telling my Ph.D. candidates, go for quantum computing. This is where you make an impact. This is where you have a wide-open field of things you can explore and this is what is going to make the job exciting for the next 10, 12, 15 years or so.

Olds: That's fantastic, and your enthusiasm for this really comes through. Your enthusiasm for HPC, for the new computing methods, and all that. And, thank you so much for taking the time.

Resch: It was a pleasure. Thank you.

Olds: Thank you, really appreciate it.

Originally posted here:
The Hyperion-insideHPC Interviews: Dr. Michael Resch Talks about the Leap from von Neumann: 'I Tell My PhD Candidates: Go for Quantum' - insideHPC

Microsoft Executive Vice President Jason Zander: Digital Transformation Accelerating Across the Energy Spectrum; Being Carbon Negative by 2030; The…

WASHINGTON--(BUSINESS WIRE)--Microsoft Executive Vice President Jason Zander says the company has never been busier partnering with the energy industry on cloud technologies and energy transition, and that the combination of COVID-19 and the oil market shock has condensed years of digital transformation into a two-month period. He also discusses the company's return to its innovative roots and its goal to have removed all of the company's historic carbon emissions by 2050, in the latest edition of CERAWeek Conversations.

In a conversation with IHS Markit (NYSE: INFO) Vice Chairman Daniel Yergin, Zander, who leads the company's cloud services business, Microsoft Azure, discusses Microsoft's rapid and massive deployment of cloud-based apps that have powered work and commerce in the COVID-19 economy; how cloud technologies are optimizing business and vaccine research; the next frontiers of quantum computing and its potential to take "problems that would take, literally, a thousand years" that "you might be able to solve in 10 seconds"; and more.

The complete video is available at: http://www.ceraweek.com/conversations

Selected excerpts: Interview recorded Thursday, July 16, 2020

(Edited slightly for brevity only)

Watch the complete video at: http://www.ceraweek.com/conversations

We've already prepositioned, in over 60 regions around the world, hundreds of data centers, millions and millions of server nodes; they're already there. Imagine, with COVID, if you had to go back and do a procurement exercise and figure out a place to put the equipment, and the supply chains were actually shut down for a while because of COVID. That's why I say, even three to five years ago we as industries would have been pretty challenged to respond as quickly as we did.

That's on the more tactical end of the spectrum. On the other end we've also done a lot of things around data sets and advanced data work. How do we find a cure? We've done things like [protein] folding at home and making sure that those things could be hosted on the cloud. These are things that will be used in the search for a vaccine for the virus. Those are wildly different spectrums, from the tactical 'we need to manage and do logistics' to 'we need a search for things that are going to get us all back to basically normal.'

There's also a whole bunch of stimulus packages and payment systems that are getting created and deployed. We've had financial services companies that run on top of the cloud. They may have been doing a couple of hundred big transactions a day; we've had them do tens to hundreds of thousands a day when some of this kicked in.

The point is, with the cloud I can just go to the cloud, provision it, use it, and eventually, when things cool back down, I can just shut it off. I don't have to worry about having bought servers, finding a place for them to live, or hiring people to take care of them.

There was disruption in the supply chain also. Many of us saw this, at least in the States: if you think even of the food supply chain, every once in a while you'd see some hiccups. There's a whole bunch of additional work that we've done around how we do even better planning around that, making sure we can hit the right levels of scale in the future. God forbid we should have another one of these, but I think we can and should be responsible to make sure that we've got it figured out.

On the policy and investment side, it has never been more important for us to collaborate with healthcare, universities, and with others. We've kicked off a whole bunch of new partnerships and work that will benefit us in the future. This was a good wake-up call for all of us in figuring out how to marshal and be able to respond even better in the future.

We've had a lot of cases where people have been moving out of their own data centers and into ours: let us basically take care of that part of the system; we can run it cheaply and efficiently. I'm seeing a huge amount of data center acceleration, folks that really want to move even faster on getting their workloads moved. That's true for oil and gas, but it's also true for the financial sector and retail.

Specifically, for oil and gas, one of the things that we're trying to do in particular is bring this kind of cloud efficiency, this kind of AI, and especially help out with places where you are doing exploration. What these have in common is the ability to take software, especially from the [independent software vendors] that work in the space (reservoir simulation, exploration), and marry that to these cloud resources where I can spin things up and spin things down. I can take advantage of that technology that I've got, and I am more efficient. I am not spending capex; I can perhaps do even more jobs than I was doing before. That allows me to go do that at scale. If you're going to have fewer resources to do something, you of course want to increase your hit rate, increase your efficiency. Those are some of the core things that we're seeing.

A lot of folks, especially in oil and gas, have some of the most sophisticated high-performance computing solutions that are out there today. What we want to be able to do with the cloud is to enable you to do even more of those solutions in a much more efficient way. We've got cases where people have been able to go from running one reservoir simulation job a day on premises [to] where they can actually go off to the cloud and, since we have all of this scale and all of this equipment, spin up and do 100 in one day. If that is going to be part of how you drive your efficiency, then being able to subscribe to that and go up and down is helping you do that job much more efficiently than you used to and giving you a lot more flexibility.

We're investing in a $1 billion fund over the next four years for carbon removal technology. We also are announcing a Microsoft sustainability calculator for cloud customers. Basically, you can get transparency into your Scope 1, 2, and 3 carbon emissions to get control. You can think of us this way: we want to hit this goal, we want to do it ourselves, we want to figure out how we build technology to help us do that, and then we want to share that technology with others. And then all along the way we want to partner with energy companies so that we can all be partnering together on this energy transition.

From a corporate perspective we've made pledges around being carbon negative, but then also working with our energy partners. The way that we look at this is: you're going to have continued requirements and improvements in standards of living around the entire planet. One of the core, critical aspects to that is energy. The world needs more energy, not less. There are absolutely the existing systems that we have out there that we need to continue to improve, but they are also a core part of how things operate.

What we want to do is have a very responsible program where we're doing things like figuring out how to go carbon negative and figuring out ways that we as a company can go carbon negative. At the same time, we are taking those same techniques and allowing others to do the same, and then partnering with energy companies around energy transformation. We still want the investments in renewables. We want to figure out how to be more efficient at the last mile when we think about the grid. I generally find that when you get that comprehensive answer back to our employees, they understand what we are doing and are generally supportive.

Coming up is a digital feedback loop where you get enough data that's coming through the system that you can actually start to make smart decisions. Our expectation is we'll have an entire connected environment. Now we start thinking about smart cities, smart factories, hospitals, campuses, etc. Imagine having all of that level of data that's coming through and the ability to do smart work shedding or shaping of electrical usage, things where I can actually control brownout conditions and other things based on energy usage. There's also the opportunity to be doing smart sharing of systems where we can do very efficient usage systems; intelligent edge and edge deployments are a core part of that.

How do we keep all the actual equipment that people are using safe? If you think about 5G and additional connectivity, we're getting all this cool new technology that's there. You have to figure out a way in which you're leveraging silicon, you're leveraging software and the best in security, and we're investing in all three.

The idea of being able to harness particle physics to do computing and be able to figure out things in minutes that would literally take centuries to pull off otherwise in classical computing is kind of mind-blowing. We're actually working with a lot of the energy companies on figuring out how quantum-inspired algorithms could make them more efficient today. As we get to full-scale quantum computing, they would run natively in hardware and would be able to do even more amazing things. That one has just the potential to really, really change the world.

The meta point is that problems that would take, literally, a thousand years, you might be able to solve in 10 seconds. We've proven how that kind of technology can work. The quantum-inspired algorithms therefore allow us to take those same kinds of techniques, but we can run them on the cloud today using some of the classic cloud computers that are there. Instead of taking 1,000 years, maybe it's something that we can get done in 10 days, but in the future 10 seconds.

About CERAWeek Conversations:

CERAWeek Conversations features original interviews and discussion with energy industry leaders, government officials and policymakers, leaders from the technology, financial and industrial communities, and energy technology innovators.

The series is produced by the team responsible for the world's preeminent energy conference, CERAWeek by IHS Markit.

New installments will be added weekly at http://www.ceraweek.com/conversations.


A complete video library is available at http://www.ceraweek.com/conversations.

About IHS Markit (www.ihsmarkit.com)

IHS Markit (NYSE: INFO) is a world leader in critical information, analytics and solutions for the major industries and markets that drive economies worldwide. The company delivers next-generation information, analytics and solutions to customers in business, finance and government, improving their operational efficiency and providing deep insights that lead to well-informed, confident decisions. IHS Markit has more than 50,000 business and government customers, including 80 percent of the Fortune Global 500 and the world's leading financial institutions. Headquartered in London, IHS Markit is committed to sustainable, profitable growth.

IHS Markit is a registered trademark of IHS Markit Ltd. and/or its affiliates. All other company and product names may be trademarks of their respective owners. © 2020 IHS Markit Ltd. All rights reserved.

Read more:
Microsoft Executive Vice President Jason Zander: Digital Transformation Accelerating Across the Energy Spectrum; Being Carbon Negative by 2030; The...

What is the Turing Test and Why Does it Matter? – Unite.AI

The field of data science seems to just get bigger and more popular every day. According to LinkedIn, data science was one of the fastest-growing job fields in 2017, and in 2020 Glassdoor ranked data scientist as one of the three best jobs in the United States. Given the growing popularity of data science, it's no surprise that more people are getting interested in the field. Yet what is data science, exactly?

Let's get acquainted with data science, taking some time to define data science, explore how big data and artificial intelligence are changing the field, learn about some common data science tools, and examine some examples of data science.

Before we can explore any data science tools or examples, we'll want to get a concise definition of data science.

Defining data science is actually a little tricky, because the term is applied to many different tasks and methods of inquiry and analysis. We can begin by reminding ourselves of what the term science means. Science is the systematic study of the physical and natural world through observation and experimentation, aiming to advance human understanding of natural processes. The important words in that definition are observation and understanding.

If data science is the process of understanding the world from patterns in data, then the responsibility of a data scientist is to transform data, analyze data, and extract patterns from data. In other words, a data scientist is provided with data and they use a number of different tools and techniques to preprocess the data (get it ready for analysis) and then analyze the data for meaningful patterns.

The role of a data scientist is similar to the role of a traditional scientist. Both are concerned with the analysis of data to support or reject hypotheses about how the world operates, trying to make sense of patterns in the data to improve our understanding of the world. Data scientists make use of the same scientific methods that a traditional scientist does. A data scientist starts by gathering observations about some phenomena they would like to study. They then formulate a hypothesis about the phenomenon in question and try to find data that nullifies their hypothesis in some way.

If the hypothesis isn't contradicted by the data, they might be able to construct a theory, or model, about how the phenomenon works, which they can go on to test again and again by seeing if it holds true for other, similar datasets. If a model is sufficiently robust, if it explains patterns well and isn't nullified during other tests, it can even be used to predict future occurrences of that phenomenon.
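
As a concrete illustration of that test-on-new-data step, here is a minimal Python sketch; the synthetic data, the linear relationship, and the use of scikit-learn are all assumptions for illustration. A model is fit on one portion of the observations and then checked against a held-out portion it never saw.

    # Fit a model on one sample of data and check whether it still explains a held-out sample.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(200, 1))          # one observed variable (synthetic)
    y = 3.0 * X[:, 0] + rng.normal(0, 1.0, 200)    # phenomenon we hypothesize is roughly linear

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    model = LinearRegression().fit(X_train, y_train)        # construct the "theory"
    print("R^2 on unseen data:", model.score(X_test, y_test))  # high score: hypothesis survives this test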

A data scientist typically won't gather their own data through an experiment. They usually won't design experiments with controls and double-blind trials to discover confounding variables that might interfere with a hypothesis. Most data analyzed by a data scientist will be gained through observational studies and systems, which is one way the job of a data scientist differs from the job of a traditional scientist, who tends to perform more experiments.

That said, a data scientist might be called on to do a form of experimentation called A/B testing, where tweaks are made to a system that gathers data in order to see how the data patterns change.
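
A hedged sketch of how such an A/B test might be analyzed in Python: the variant counts are invented, and the choice of a two-proportion z-test (via statsmodels) is one common approach, not a prescribed method.

    # Compare conversion rates of two variants with a two-sided proportions z-test.
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [120, 150]   # successes observed in variant A and variant B (made-up numbers)
    visitors = [2400, 2500]    # users exposed to each variant

    stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
    print(f"z = {stat:.2f}, p = {p_value:.3f}")  # a small p-value suggests the tweak changed behavior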

Regardless of the techniques and tools used, data science ultimately aims to improve our understanding of the world by making sense out of data, and data is gained through observation and experimentation. Data science is the process of using algorithms, statistical principles, and various tools and machines to draw insights out of data, insights that help us understand patterns in the world around us.

You might be seeing that almost any activity involving the analysis of data in a scientific manner can be called data science, which is part of what makes defining data science so hard. To make it clearer, let's explore some of the activities that a data scientist might do on a daily basis.

Data science brings many different disciplines and specialties together. Photo: Calvin Andrus via Wikimedia Commons, CC BY SA 3.0 (https://commons.wikimedia.org/wiki/File:DataScienceDisciplines.png)

On any given day, a data scientist might be asked to: create data storage and retrieval schemas; create data ETL (extract, transform, load) pipelines and clean up data; employ statistical methods; craft data visualizations and dashboards; implement artificial intelligence and machine learning algorithms; and make recommendations for actions based on the data.

Let's break down the tasks listed above a little.

A data scientist may be required to handle the installation of technologies needed to store and retrieve data, paying attention to both hardware and software. The person responsible for this role may also be referred to as a data engineer, although some companies include these responsibilities under the role of the data scientist. A data scientist may also need to create, or assist in the creation of, ETL pipelines. Data very rarely comes formatted just as a data scientist needs it. Instead, the data will need to be received in a raw form from the data source, transformed into a usable format, and preprocessed (things like standardizing the data, dropping redundancies, and removing corrupted data).
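
A minimal sketch of that transform-and-preprocess step, assuming pandas and a hypothetical CSV file with made-up column names:

    import pandas as pd

    raw = pd.read_csv("sensor_readings_raw.csv")   # extract: raw data from the source (hypothetical file)

    # drop redundancies and remove corrupted / incomplete rows
    clean = raw.drop_duplicates().dropna(subset=["temperature", "pressure"]).copy()

    # standardize the numeric columns to zero mean and unit variance
    for col in ["temperature", "pressure"]:
        clean[col] = (clean[col] - clean[col].mean()) / clean[col].std()

    clean.to_csv("sensor_readings_clean.csv", index=False)   # load: store the analysis-ready data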

The application of statistics is necessary to turn simply looking at data and interpreting it into an actual science. Statistical methods are used to extract relevant patterns from datasets, and a data scientist needs to be well versed in statistical concepts. They need to be able to discern meaningful correlations from spurious correlations by controlling for confounding variables. They also need to know the right tools to use to determine which features in the dataset are important to their model and have predictive power. A data scientist needs to know when to use a regression approach vs. a classification approach, and when to care about the mean of a sample vs. the median of a sample. A data scientist just wouldn't be a scientist without these crucial skills.
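
A tiny worked example of the mean-vs.-median point, using invented income figures: on skewed data the two summaries tell very different stories.

    import numpy as np

    incomes = np.array([28_000, 31_000, 35_000, 36_000, 40_000, 42_000, 1_500_000])
    print("mean:  ", incomes.mean())      # pulled far upward by the single outlier
    print("median:", np.median(incomes))  # a more robust summary of the "typical" value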

A crucial part of a data scientist's job is communicating their findings to others. If a data scientist can't effectively communicate their findings, then the implications of those findings don't matter. A data scientist should be an effective storyteller as well. This means producing visualizations that communicate relevant points about the dataset and the patterns discovered within it. There are many different data visualization tools that a data scientist might use, and they may visualize data for the purposes of initial, basic exploration (exploratory data analysis) or to present the results that a model produces.
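
As an illustration, here is a minimal exploratory plot in Python with matplotlib; the synthetic values stand in for whatever dataset is actually being explored.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    values = rng.normal(loc=50, scale=10, size=1_000)   # placeholder feature values

    plt.hist(values, bins=30, edgecolor="black")
    plt.title("Distribution of a feature (exploratory data analysis)")
    plt.xlabel("value")
    plt.ylabel("count")
    plt.savefig("feature_distribution.png")   # or plt.show() in an interactive session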

A data scientist needs to have some intuition about the requirements and goals of their organization or business, because they need to know what types of variables and features they should be analyzing and which patterns will help their organization achieve its goals. Data scientists also need to be aware of the constraints they are operating under and the assumptions that the organization's leadership is making.

Machine learning and other artificial intelligence algorithms and models are tools used by data scientists to analyze data, identify patterns within data, discern relationships between variables, and make predictions about future events.
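A short sketch of that predictive workflow with scikit-learn, using a bundled toy dataset purely for illustration: a classifier is trained on past, labeled examples and then scored on examples it has not seen, standing in for "future events."

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)                 # toy dataset for illustration
    X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.2, random_state=42)

    clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_old, y_old)
    print("accuracy on previously unseen cases:", clf.score(X_new, y_new))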

As data collection methods have gotten more sophisticated and databases larger, a difference has arisen between traditional data science and big data science.

Traditional data analytics and data science are done with descriptive and exploratory analytics, aiming to find patterns and analyze the performance results of projects. Traditional data analytics methods often focus on just past and current data. Data analysts often deal with data that has already been cleaned and standardized, while data scientists often deal with complex and dirty data. More advanced data analytics and data science techniques might be used to predict future behavior, although this is more often done with big data, as predictive models often need large amounts of data to be reliably constructed.

Big data refers to data that is too large and complex to be handled with traditional data analytics and data science techniques and tools. Big data is often collected through online platforms, and advanced data transformation tools are used to make the large volumes of data ready for inspection by data scientists. As more data is collected all the time, more of a data scientist's job involves the analysis of big data.

Common data science tools include tools to store data, carry out exploratory data analysis, model data, carry out ETL, and visualize data. Platforms like Amazon Web Services, Microsoft Azure, and Google Cloud all offer tools to help data scientists store, transform, analyze, and model data. There are also standalone data science tools like Airflow (data infrastructure) and Tableau (data visualization and analytics).

In terms of the machine learning and artificial intelligence algorithms used to model data, they are often provided through data science modules and platforms like TensorFlow, PyTorch, and Azure Machine Learning studio. These platforms let data scientists make edits to their datasets, compose machine learning architectures, and train machine learning models.
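
A tiny PyTorch sketch of that workflow, with the network shape and the random data as placeholders: compose a model architecture, run a forward pass, and take one training step.

    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))   # compose an architecture
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    X = torch.randn(64, 10)   # a batch of 64 examples with 10 features each (random placeholder data)
    y = torch.randn(64, 1)    # target values

    pred = model(X)           # forward pass
    loss = loss_fn(pred, y)
    optimizer.zero_grad()
    loss.backward()           # backpropagation
    optimizer.step()          # one parameter update
    print("training loss:", loss.item())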

Other common data science tools and libraries include SAS (for statistical modeling), Apache Spark (for the analysis of streaming data), D3.js (for interactive visualizations in the browser), and Jupyter (for interactive, sharable code blocks and visualizations).

Photo: Seonjae Jo via Flickr, CC BY SA 2.0 (https://www.flickr.com/photos/130860834@N02/19786840570)

Examples of data science and its applications are everywhere. Data science has applications in everything from food delivery and sports to traffic and health. Data is everywhere, and so data science can be applied to almost everything.

In terms of food, Uber is investing in an expansion to its ride-sharing system focused on the delivery of food, Uber Eats. Uber Eats needs to get people their food in a timely fashion, while it is still hot and fresh. In order for this to occur, data scientists for the company need to use statistical modeling that takes into account aspects like distance from restaurants to delivery points, holiday rushes, cooking time, and even weather conditions, all considered with the goal of optimizing delivery times.
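
A hedged sketch of the kind of statistical model described above: predict delivery time from features such as distance, cooking time, and weather. The feature names, the synthetic data, and the choice of a plain linear regression are assumptions for illustration, not Uber's actual system.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(7)
    n = 500
    distance_km = rng.uniform(0.5, 10, n)
    cook_minutes = rng.uniform(5, 40, n)
    raining = rng.integers(0, 2, n)

    # synthetic "ground truth": longer distances, longer cooking and rain all add time
    delivery_minutes = 5 + 3.5 * distance_km + 0.9 * cook_minutes + 6 * raining + rng.normal(0, 3, n)

    X = np.column_stack([distance_km, cook_minutes, raining])
    model = LinearRegression().fit(X, delivery_minutes)
    print("estimated effect of rain (minutes):", round(model.coef_[2], 1))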

Sports statistics are used by team managers to determine who the best players are and form strong, reliable teams that will win games. One notable example is the data science documented by Michael Lewis in the book Moneyball, where the general manager of the Oakland Athletics team analyzed a variety of statistics to identify quality players that could be signed to the team at relatively low cost.

The analysis of traffic patterns is critical for the creation of self-driving vehicles. Self-driving vehicles must be able to predict the activity around them and respond to changes in road conditions, like the increased stopping distance required when it is raining, as well as the presence of more cars on the road during rush hour. Beyond self-driving vehicles, apps like Google Maps analyze traffic patterns to tell commuters how long it will take them to get to their destination using various routes and forms of transportation.

In terms of health data science, computer vision is often combined with machine learning and other AI techniques to create image classifiers capable of examining things like X-rays, FMRIs, and ultrasounds to see if there are any potential medical issues that might show up in the scan. These algorithms can be used to help clinicians diagnose disease.
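
A minimal sketch of such an image classifier in PyTorch; the input size, the layers, and the random tensor standing in for a scan are all illustrative placeholders, and a real system would be trained on labeled medical images.

    import torch
    from torch import nn

    classifier = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 16 * 16, 2),   # two classes: potential issue vs. none
    )

    fake_scan = torch.randn(1, 1, 64, 64)   # one 64x64 grayscale "scan" (random placeholder)
    logits = classifier(fake_scan)
    print("class probabilities:", torch.softmax(logits, dim=1))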

Ultimately, data science covers numerous activities and brings together aspects of different disciplines. However, data science is always concerned with telling compelling, interesting stories from data, and with using data to better understand the world.

Read the original:
What is the Turing Test and Why Does it Matter? - Unite.AI

The Power of Epigenetics in Human Reproduction – Newswise

Newswise – Addressing the mystery of how reproduction is shaped by childhood events and environment, Professor Philippa Melamed, together with PhD student Ben Bar-Sadeh, postdoctoral fellow Dr. Sergei Rudnizky, and colleagues Dr. Lilach Pnueli and Professor Ariel Kaplan, all from the Technion Faculty of Biology, and collaborators from the UK, Professor Gillian R. Bentley from Durham University and Professor Reinhard Stöger from the University of Nottingham, have just published a paper in Nature Reviews Endocrinology on the role of epigenetics in human reproduction.

Epigenetics refers to the packaging of DNA, which can be altered in response to external signals (environment) through the addition of chemical tags to the DNA or the histone proteins that organize and compact the DNA inside the cell. This packaging affects the ability of a gene to be accessed and thus also its expression levels. So environmentally induced changes in this epigenetic packaging can lead to major variations in the phenotype (observable characteristics or traits) without changing the genetic code. This re-programming of gene expression patterns underlies some of our ability to adapt.

Reproductive characteristics are highly variable and responsive particularly to early life environment, during which they appear to be programmed to optimize an individual's reproductive success in accordance with the surroundings. Although some of these adaptations can be beneficial, they also carry negative health consequences that may be far-reaching. These include the age of pubertal onset and duration of the reproductive lifespan for women, and also the levels of circulating reproductive hormones; not only is fertility affected, but also predisposition to hormone-dependent cancers and other age-related diseases.

While epigenetic modifications are believed to play a role in the plasticity of reproductive traits, the actual mechanisms are mostly still not clear. Moreover, reproductive hormones also modify the epigenome and epigenetic aging, which complicates distinguishing cause from effect, particularly when trying to understand human reproductive phenotypes in which the relevant tissues are inaccessible for analysis. Integrated studies are needed, including observations and whatever measurements are possible in human populations, incorporation of animal models, cell culture, and even single-molecule studies, in order to determine the mechanisms responsible for the human reproductive phenotype.

The review emphasizes that there is a clinical need to understand the characteristics of epigenetic regulation of reproductive function and the underlying mechanisms of adaptive responses for properly informed decisions on treating patients from diverse backgrounds. In addition, this knowledge should form the basis for formulating lifestyle recommendations and novel treatments that utilize the epigenetic pathway to alter a reproductive phenotype.

Prof. Melamed emphasizes that a multifaceted cross-disciplinary approach is essential for elucidating the involvement of epigenetics in human reproductive function, spanning the grand scale of human cohort big data and anthropological studies in unique human populations, through animal models and cell culture experiments, to the exquisitely high resolution of single-molecule biophysical approaches. This will continue to require collaboration and cooperation.

For more than a century, the Technion - Israel Institute of Technology has pioneered in science and technology education and delivered world-changing impact. Proudly a global university, the Technion has long leveraged boundary-crossing collaborations to advance breakthrough research and technologies. Now with a presence in three countries, the Technion will prepare the next generation of global innovators. Technion people, ideas and inventions make immeasurable contributions to the world, innovating in fields from cancer research and sustainable energy to quantum computing and computer science to do good around the world.

The American Technion Society supports visionary education and world-changing impact through the Technion - Israel Institute of Technology. Based in New York City, we represent thousands of US donors, alumni and stakeholders who invest in the Technion's growth and innovation to advance critical research and technologies that serve the State of Israel and the global good. Over more than 75 years, our nationwide supporter network has funded new Technion scholarships, research, labs, and facilities that have helped deliver world-changing contributions and extend Technion education to campuses in three countries.

Original post:
The Power of Epigenetics in Human Reproduction - Newswise

The Honeywell transition: From quantum computing to making masks – WRAL Tech Wire

CHARLOTTE – Honeywell no longer sells its iconic home thermostats, but it's still in the business of making control systems for buildings and aircraft.

That's put the 114-year-old conglomerate in a tough spot as workplaces have gone vacant and flights have been grounded in response to the coronavirus pandemic.

Darius Adamczyk, who became CEO in 2017, spoke with The Associated Press about how the business is adjusting to the pandemic, diverting resources to build personal protective equipment and continuing a quest for a powerful quantum computer that works by trapping ions. The interview has been edited for length and clarity.

Q: How is the crisis affecting some of your core business segments, especially aerospace?

A: The air transport segment obviously is impacted the most because its tied to air travel and production of new aircraft. Business aviation is depressed as well. The third segment, which has been fairly resilient, is defense and space. We expect to see growth in that segment even this year.

Q: You've had to do layoffs?

A: Unfortunately, we've had to take some cost actions. It's a bit more drastic in aerospace and our (performance materials) business and much less so in some of the other businesses. Some of the actions we've taken have been temporary things. We've created a $10 million fund for employees who are financially impacted by COVID. We extended sick leave for a lot of our hourly employees. Taking care of our employees is the No. 1 priority and making sure that they're healthy and safe, but also protecting the business long-term, because the economic conditions are severe. Some of the levels of fall-off here in Q2 are much more dramatic than we saw in the 2008/2009 recession.

Q: How did Honeywell get into building a quantum computer?

A: One of the bigger challenges in making a quantum computer work is the ability to really control the computer itself. The way we kind of came into this play is we've had the controls expertise, but we didn't have so much trapped-ion expertise.

Q: How does your approach differ from what Google and IBM have been trying to do?

A: I don't know exactly, technically, what they're doing. Some of these things are very proprietary and very secret. But we're very confident, in terms of the public announcements and what we've been able to learn from some of the publicly available information, that we, in fact, have the most powerful quantum computer in the world. It's going to get better and better by an order of magnitude every year.

Q: How'd you go about repurposing factories in Rhode Island and Arizona to make respiratory masks?

A: We very quickly mobilized a couple of facilities that we weren't fully utilizing. Something that would normally take us nine months took us literally four to five weeks to create. We've gone from zero production to having two fully functioning facilities, making about 20 million masks a month.

Q: President Trump didn't wear a mask while visiting Honeywell's Arizona factory in May. Did he talk to you about whether he should wear a mask?

A: No.

Q: What did he talk about?

A: He was very kind in his comments about the kind of contribution Honeywell has made, not just today, in this crisis, but really in other times of crisis, such as in World War II, and some of the other technologies that we've provided in the past. So I think it was certainly nice to hear.

Read the original:
The Honeywell transition: From quantum computing to making masks - WRAL Tech Wire