Archive for the ‘Machine Learning’ Category

The Wonders of Machine Learning: Tackling Lint Debug Quickly with … – Design and Reuse

Achieving predictable chip design closure has become progressively challenging for teams worldwide. While linting tools have existed for decades, traditional tools require significant effort to filter out the noise and eventually zero in on the real design issues. With increasing application specific integrated circuit (ASIC) size and complexity, chip designers require higher debug efficiency when managing a large number of violations in order to ultimately achieve a shorter turnaround time (TAT).

In the first two parts of this linting series, we established how linting offers a comprehensive mechanism to check for fundamental chip design faults and touched on the many benefits of having a guiding methodology for deeper functional lint analysis.

Recognizing the disparities between in-house coding styles, our extensive experience working with industry leaders has given us an edge in accelerating RTL and system-on-chip (SoC) design flows for customers to a degree previously unseen. Solutions such as Synopsys VC SpyGlass CDC have already proven how valuable advanced machine learning (ML) algorithms are for achieving SoC design signoff with scalable performance and high debug productivity. Leveraging industry-standard practices and our decades of expertise, the latest offering of Synopsys VC SpyGlass Lint now includes powerful ML capabilities to significantly improve debug efficiency for designers.

In the finale of this blog series, we'll cover the downsides of traditional linting tools, how ML-driven root-cause analysis (ML-RCA) accelerates design signoff, the key benefits of Synopsys VC SpyGlass Lint, and where we see the future of smart linting headed.


Read more:
The Wonders of Machine Learning: Tackling Lint Debug Quickly with ... - Design and Reuse

Can Artificial Intelligence and Machine Learning Find Life in Space? – BBN Times

Artificial intelligence (AI) and machine learning (ML) are increasingly being used in the field of astrobiology to help in the search for life in space.

The latest advances in artificial intelligence and machine learning could accelerate the search for extraterrestrial life by showing the most promising places to look.

With the vastness of the universe, the search for life beyond Earth is a complex and challenging task. AI and ML have the potential to enhance our ability to detect signs of life and to identify the most promising targets for exploration.

The use of AI and ML in space applications have picked up pace as researchers and scientists worldwide deploy machine learning algorithms that analyze vast amounts of data and identify signals and potential targets in space.

The universe is a game of billions: billions of years old, spanning billions of light-years, and harboring billions of stars, galaxies, planets and unidentified objects. Amidst this, we are but a tiny speck of life on the only known habitable planet. Scientists, astronomers and ordinary people alike from all over the world have discussed the idea of extraterrestrial life existing in some corner of the universe. The possibility of life beyond Earth has motivated various efforts to discover traces of life through signals, observations, detections and more. And with AI and ML in space applications, detecting life in space has moved beyond a dream and entered its practical stages.

The term SETI, or Search for Extraterrestrial Intelligence, refers to the effort to find intelligent extraterrestrial life by searching the cosmos for signs of advanced civilizations. The theory underlying SETI is that intelligent extraterrestrial civilizations might exist and might be sending out signals that we could pick up. These signals could manifest as deliberate messages, unintended emissions from advanced technology, or even evidence of enormous engineering undertakings like Dyson spheres. SETI's role includes, but is not limited to, the applications described below.

To analyze the massive volumes of data gathered from radio telescopes and other sensors used in the hunt for extraterrestrial intelligence, SETI researchers employ machine learning techniques. ML can be used to help analyze data from other instruments, such as optical telescopes, that may be used in the search for extraterrestrial intelligence. For example, machine learning algorithms can be trained to recognize patterns in the light curves of stars that may indicate the presence of advanced technology.

The identification of signals that might be an indication of extraterrestrial intelligence is one of the ways SETI makes use of machine learning. Both natural signals, such as those produced by pulsars, and artificial signals, such as those from satellites and mobile phones, can be collected by radio telescopes. The properties of these various signals can be used to train machine learning algorithms to identify them and separate them from potential signals from extraterrestrial intelligence.

A further application of ML in SETI is to assist in locating and categorizing possible targets for further observations. With so much information to sort through, it can be challenging for human researchers to decide which signals are most intriguing and deserving of additional study. Based on criteria like signal strength, frequency and duration, machine learning algorithms can be used to automatically select possible targets.
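As a toy illustration of this kind of automated triage (the feature names, numeric values, and the threshold below are invented for the example, not real SETI pipeline details), a minimal nearest-centroid screen might flag candidate signals whose strength, drift, and duration look unlike known radio-frequency interference:

```python
import numpy as np

# Hypothetical feature vectors: [signal-to-noise ratio, drift rate (Hz/s), duration (s)].
# All values are illustrative, not real observational data.
known_rfi  = np.array([[8.0, 0.0, 3600.0], [12.0, 0.0, 7200.0]])
candidates = np.array([[15.0, 0.3, 300.0], [9.0, 0.0, 5400.0]])

# Normalize features so no single unit dominates the distance metric.
all_sig = np.vstack([known_rfi, candidates])
mu, sigma = all_sig.mean(axis=0), all_sig.std(axis=0) + 1e-9

def z(x):
    return (x - mu) / sigma

# Signals far from the RFI centroid in normalized feature space
# get flagged for human follow-up.
rfi_centroid = z(known_rfi).mean(axis=0)
scores = np.linalg.norm(z(candidates) - rfi_centroid, axis=1)
flagged = scores > 1.0  # threshold chosen purely for illustration
print(flagged)  # → [ True False]
```

The first candidate (strong, drifting, short-lived) sits far from the interference cluster and is flagged; the second looks like ordinary RFI and is not. Real pipelines use far richer features and models, but the ranking idea is the same.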

While artificial intelligence and machine learning in space applications have shown significant promise in the study of astrobiology, finding extraterrestrial life is a complex and ongoing endeavor that requires many different approaches and technologies. Ultimately, only the collaborative efforts of scientific ingenuity and technological innovation will allow us to find life beyond our planet.

Read more here:
Can Artificial Intelligence and Machine Learning Find Life in Space? - BBN Times

Iconic image of M87 black hole just got a machine-learning makeover – Ars Technica

This new, sharper image of the M87 supermassive black hole was generated by the PRIMO algorithm using 2017 EHT data.

Medeiros et al. 2023

The iconic image of a supermassive black hole in the Messier 87 (M87) galaxy, described by astronomers as a "fuzzy orange donut," was a stunning testament to the capabilities of the Event Horizon Telescope (EHT). But there were still gaps in the observational data, limiting the resolution the EHT was able to achieve. Now four members of the EHT collaboration have applied a new machine-learning technique dubbed PRIMO (principal-component interferometric modeling) to the original 2017 data, giving that famous image its first makeover. They described their achievement in a new paper published in The Astrophysical Journal Letters.

"PRIMO is a new approach to the difficult task of constructing images from EHT observations," said co-author Tod Lauer (NOIRLab). "It provides a way to compensate for the missing information about the object being observed, which is required to generate the image that would have been seen using a single gigantic radio telescope the size of the Earth."

As we've reported previously, the EHT isn't a telescope in the traditional sense. Instead, it's a collection of telescopes scattered around the globe, including hardware from Hawaii to Europe and from the South Pole to Greenland, though not all of these were active during the initial observations. The telescope is created by a process called interferometry, which uses light captured at different locations to build an image with the resolution of a giant telescope, one effectively as large as the distance between the most widely separated individual telescopes.
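The resolution gain from interferometry follows the diffraction limit θ ≈ λ/B, where λ is the observing wavelength and B is the longest baseline between telescopes. A quick back-of-the-envelope sketch (the wavelength and baseline values below are approximate, not official EHT figures):

```python
import math

def angular_resolution_uas(wavelength_m, baseline_m):
    """Diffraction-limited resolution theta ~ lambda / B, converted
    from radians to microarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return theta_rad * (180 / math.pi) * 3600 * 1e6

# EHT observes at roughly 1.3 mm; the longest baseline is roughly
# Earth's diameter (~12,700 km).
print(round(angular_resolution_uas(1.3e-3, 1.27e7), 1))  # → 21.1
```

About 20 microarcseconds, which is why an Earth-sized virtual telescope can resolve the shadow of M87's black hole while any single dish cannot.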

Back in 2019, the EHT made headlines with its announcement of the first direct image of a black hole, located in the constellation of Virgo, some 55 million light-years away. It was a feat that would have been impossible a mere generation ago, made possible by technological breakthroughs, innovative new algorithms, and of course, connecting several of the world's best radio observatories. Science magazine named the image its Breakthrough of the Year.

The EHT captured photons trapped in orbit around the black hole, swirling around at near the speed of light, creating a bright ring around it. From this, astronomers deduced that the black hole is spinning clockwise. The imaging also revealed the shadow of the black hole, a dark central region within the ring. That shadow is as close as astronomers can get to taking a picture of the actual black hole, from which light cannot escape once it crosses the event horizon. And just as the size of the event horizon is proportional to the black hole's mass, so, too, is the black hole's shadow: the more massive the black hole, the larger the shadow. It was a stunning confirmation of the general theory of relativity, showing that those predictions hold up even in extreme gravitational environments.


Two years later, the EHT released a new image of the same black hole, this time showing how it looked in polarized light. The ability to measure that polarization for the first time, a signature of magnetic fields at the black hole's edge, yielded fresh insight into how black holes gobble up matter and emit powerful jets from their cores. That polarization enabled astronomers to map the magnetic field lines at the inner edge and to study the interaction between matter flowing in and being blown outward.

And now PRIMO has given astronomers an even sharper look at M87's supermassive black hole. "We are using physics to fill in regions of missing data in a way that has never been done before by using machine learning," co-author Lia Medeiros of the Institute for Advanced Study said. "This could have important implications for interferometry, which plays a role in fields from exoplanets to medicine."

PRIMO relies upon so-called dictionary learning, in which a computer learns to identify whether an unknown image is, for example, that of a banana, after being trained on large sets of different images of bananas. In the case of M87*, PRIMO analyzed over 30,000 simulated images of black holes accreting gas, taking into account many different models for how this accretion of matter occurs. Structural patterns were sorted by how frequently they showed up in the simulations, and PRIMO then blended them to produce a new, high-fidelity image of the black hole.

Overview of simulations generated for the training set of the PRIMO algorithm. Credit: Medeiros et al. 2023

The new image shows the central large dark region in greater detail, while the surrounding cloud of accreting gas is attenuated into a "skinny donut." Per the authors, the image is consistent with both the 2017 EHT data and with theoretical predictions, most notably the bright rings that result from hot gas falling into the black hole. The higher resolution will help astronomers more accurately peg the mass of the black hole, as well as tighten constraints on alternative models for the event horizon, and enable more robust tests of gravity.

"With our new machine-learning technique, PRIMO, we were able to achieve the maximum resolution of the current array," Medeiros said. "Since we cannot study black holes up close, the detail of an image plays a critical role in our ability to understand its behavior. The width of the ring in the image is now smaller by about a factor of two, which will be a powerful constraint for our theoretical models and tests of gravity."

PRIMO should prove just as useful for other EHT observations, most notably the first image (released just last year) of the black hole (Sagittarius A*) at the center of our own Milky Way galaxy. While M87's black hole was an easier, steadier target, with nearly all images looking the same, that was not the case for Sagittarius A*. The final image was an average of the different images from observational data that the team collected over multiple days. It took five years, multiple supercomputer simulations, and the development of new computational imaging algorithms capable of making inferences to fill in the blanks in the data. PRIMO could improve the resolution even further.

"The 2019 image was just the beginning," said Medeiros. "If a picture is worth a thousand words, the data underlying that image have many more stories to tell. PRIMO will continue to be a critical tool in extracting such insights."

DOI: The Astrophysical Journal Letters, 2023. 10.3847/2041-8213/acc32d (About DOIs).

See original here:
Iconic image of M87 black hole just got a machine-learning makeover - Ars Technica

The first black hole portrait got sharper thanks to machine learning – Science News Magazine

If the first image of a black hole looked like a fuzzy doughnut, this one is a thin onion ring.

Using a machine learning technique, scientists have sharpened the portrait of the supermassive black hole at the center of galaxy M87, revealing a thinner halo of glowing gas than seen previously.

In 2019, scientists with the Event Horizon Telescope unveiled an image of M87's black hole (SN: 4/10/19). The picture was the first ever taken of a black hole and showed a blurry orange ring of swirling gas silhouetted by the dark behemoth. The new ring's thickness is half that of the original, despite being based on the same data, researchers report April 13 in the Astrophysical Journal Letters.

The Event Horizon Telescope takes data using a network of telescopes across the globe. But that technique leaves holes in the data. "Since we can't just cover the entire Earth in telescopes, what that means is that there is some missing information," says astrophysicist Lia Medeiros of the Institute for Advanced Study in Princeton, N.J. "We need to have an algorithm that can fill in those gaps."

Previous analyses had used certain assumptions to fill in those gaps, such as preferring an image that is smooth. But the new technique uses machine learning to fill in those gaps based on over 30,000 simulated images of matter swirling around a black hole, creating a sharper image.

In the future, this technique could help scientists get a better handle on the black hole's mass and perform improved tests of gravity and other studies of black hole physics.


Physics writer Emily Conover has a Ph.D. in physics from the University of Chicago. She is a two-time winner of the D.C. Science Writers Association Newsbrief award.


Read the original here:
The first black hole portrait got sharper thanks to machine learning - Science News Magazine

Iconic first black hole picture is now sharper, thanks to new machine-learning tech – USA TODAY

Humanity'sfirst image of a black hole has gotten a makeover.

The iconic picture of the supermassive black hole at the center of Messier 87, a giant galaxy sitting 53 million light-years from Earth in the "nearby" Virgo cluster, was first released in 2019. The M87 black hole appeared as a flaming, fuzzy doughnut-like object emerging from a dark backdrop, but now we have a sharper look.

The new image, published Thursday in an Astrophysical Journal Letters study, gives us a refined look at the black hole, which now looks like a skinnier, bright orange ring with a clearer dark center.

According to the study, the image was reconstructed using new machine-learning technology called PRIMO. Scientists relied on the same data that was used to create the 2019 image, originally obtained by an Event Horizon Telescope collaboration in 2017.

'We have seen what we thought was unseeable': First photo of a black hole revealed in 2019

2022: Astronomers capture first image of the huge black hole at the center of the Milky Way

In 2017, a network of radio telescopes around the world formed "an Earth-sized virtual telescope with the power and resolution capable of observing the 'shadow' of a black hole's event horizon," the National Science Foundation's NOIRLab notes.

While this allowed scientists to see incredible details, gaps remained. PRIMO has helped fill in the missing pieces.

"Since we cannot study black holes up close, the detail in an image plays a critical role in our ability to understand its behavior," Lia Medeiros, an astrophysicist at the Institute for Advanced Study in New Jersey and lead author of Thursday's study, said in the NOIRLab press release.

"The width of the ring in the image is now smaller by about a factor of two, which will be a powerful constraint for our theoretical models and tests of gravity."

'Uranus has never looked better': James Webb Space Telescope captures new image of ice giant

The machine learning technique also brings the possibility of further work on other images of celestial objects, NOIRLab notes, including Sagittarius A*, the black hole at the center of our Milky Way galaxy. Astronomers revealed an image of Sagittarius A* in May 2022, which was also captured using EHT data.

"The 2019 image was just the beginning," Medeiros said. "If a picture is worth a thousand words, the data underlying that image have many more stories to tell. PRIMO will continue to be a critical tool in extracting such insights."


Contributing: Doyle Rice, USA TODAY. The Associated Press.

The rest is here:
Iconic first black hole picture is now sharper, thanks to new machine-learning tech - USA TODAY