Archive for the ‘Quantum Computer’ Category

#SpaceWatchGL Opinion: Quantum Technology and Impact of the Global Space Security – SpaceWatch.Global

by Rania Toukebri

Cyberattacks are increasing exponentially over time, so improving the security of communications is crucial for protecting sensitive information belonging to states and individuals. For states, securing communications is mandatory for maintaining strategic geopolitical influence.

Most technologies have been based on the classical laws of physics. Modern communication technology transfers encrypted data using complex mathematical algorithms. The complexity of these algorithms ensures that third parties cannot easily crack them. However, with stronger computing power and the increasing sophistication of hacking technologies, such methods of communication are increasingly vulnerable to interference. The world's first quantum-enabled satellite is the Chinese satellite Micius. The purpose of the mission is to investigate space-based quantum communications for a couple of years in order to create future hack-proof communication networks.

In a classical computer, each processing step operates on a combination of bits. A bit can be either zero or one. A qubit, the quantum bit, can be a zero and a one at the same time. So processing qubits means processing several combinations of zeroes and ones simultaneously, and the increased speed of quantum computing comes from exploiting this parallelism.
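As a rough illustration of that parallelism, here is a minimal state-vector sketch in Python (NumPy only; all names are chosen for illustration): a single qubit is a pair of complex amplitudes, and three qubits in superposition carry eight amplitudes at once.

```python
# Minimal sketch of the qubit idea: a qubit's state is a vector of two complex
# amplitudes, and n qubits hold 2**n amplitudes at once. Pure math, no hardware.
import numpy as np

zero = np.array([1, 0], dtype=complex)                        # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

plus = H @ zero                          # equal superposition of 0 and 1
print(np.abs(plus) ** 2)                 # measurement probabilities: [0.5, 0.5]

# Three qubits in superposition represent all 8 bit combinations simultaneously.
state = np.kron(np.kron(plus, plus), plus)
print(state.shape, np.abs(state[0]) ** 2)   # (8,) amplitudes, each with probability 1/8
```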

According to quantum theory, subatomic particles can act as if they are in two places at once. This property can be manipulated so that a particle adopts either one of two states; as long as the particle is not observed, it remains in a state of superposition.

There have been successful quantum encryption experiments, but with limitations. When messages are sent through optical fibers, the signal is absorbed by the medium, so transmission over long distances is not possible. Making such communications work over long distances would require quantum repeaters, devices that capture and retransmit the quantum information.

China found another solution by beaming entangled photons through the vacuum of space, so they won't be absorbed.

The Micius satellite works by firing a laser through a crystal, creating pairs of photons in a state of entanglement. One half of each pair is sent to each of two separate stations on Earth.

The objective of this method is to generate communication keys from the entangled photons. The information to be transmitted is encoded with a set of random numbers shared between the transmitter and the receiver. If a hacker tries to eavesdrop on or interfere with one of the beams of entangled photons, the observer effect of quantum theory alters the key, making the interception detectable and the stolen key useless. As a consequence, the transmitter can discard the compromised key and exchange the information securely.
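To see how eavesdropping reveals itself, here is a toy classical simulation. It models a simplified BB84-style prepare-and-measure exchange rather than the entanglement-based scheme actually flown on Micius, and every name and number is illustrative: when an interceptor measures the photons in randomly chosen bases, roughly a quarter of the compared key bits disagree, so the legitimate parties can detect the intrusion and discard the key.

```python
# Toy intercept-resend simulation: measuring a photon in the wrong basis disturbs
# it, which shows up as an elevated error rate in the sifted key.
import random

def measure(photon_basis, photon_bit, meas_basis):
    """Measuring in the wrong basis gives a random outcome (the state collapses)."""
    if meas_basis == photon_basis:
        return photon_bit
    return random.randint(0, 1)

def sifted_key_error_rate(n_photons, eavesdropper):
    errors = kept = 0
    for _ in range(n_photons):
        a_bit, a_basis = random.randint(0, 1), random.randint(0, 1)   # sender
        basis, bit = a_basis, a_bit                                   # photon in flight
        if eavesdropper:
            e_basis = random.randint(0, 1)
            bit = measure(basis, bit, e_basis)    # interception disturbs the state
            basis = e_basis                       # photon is resent in the spy's basis
        b_basis = random.randint(0, 1)            # receiver's random basis choice
        b_bit = measure(basis, bit, b_basis)
        if b_basis == a_basis:                    # keep only matching-basis rounds
            kept += 1
            errors += (b_bit != a_bit)
    return errors / kept

print(sifted_key_error_rate(20000, eavesdropper=False))  # ~0.0
print(sifted_key_error_rate(20000, eavesdropper=True))   # ~0.25: eavesdropping detected
```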

Quantum communication in military and defense will enable China to become a strong leader in military sophistication and will strengthen its geopolitical influence, thereby diminishing US authority.

China has already begun this economic and technological development, while US foreign policy is eroding its dominance on the global geopolitical scene. Quantum technological development will accelerate the shift toward a multipolar balance of power in international relations.

On the other hand, the USA is also conducting research on quantum technologies, but US investment remains limited compared with that of China and Europe, which makes China the leader in quantum communication. The USA recognizes the importance of this field and has started making more efforts, both technically and financially. But the question remains: who will reach the frontier first?

In line with its space strategy, China has in recent years invested heavily in technological development, including its pioneering space program, with the aim of achieving air and military dominance. The Micius satellite could trigger a leap in military advancement and information dominance. This space program is emblematic of the Chinese strategy on technological development.

China launched its first satellite in 1970, after the USA and Russia. The strategy followed afterwards produced exponential growth in space and technological development, funded by massive financial investment made possible by rapid economic growth. BeiDou (China's satellite navigation system) provides precise geolocation information for Chinese weapon systems and communication coverage for its military, a source of strength in both military and geopolitical terms.

The policy continues in that direction, with global network coverage provided by 35 Chinese satellites. The Chinese space program has already launched two space laboratories, and its aim is to launch a permanent crewed space station in 2022, knowing that the International Space Station is due to retire before 2028.

As a consequence, China would become the only country with a space station, making it indispensable to other countries and, in turn, a center of power. Further Chinese space missions involving robotics and AI have taken place, preparing for the next generation of space technology. Quantum technology is the accelerator for reaching the ultimate goal of this space program and has therefore become the first priority of its technological research. By 2030, China aims to establish a network of quantum satellites supporting a quantum internet.

The network of quantum satellites (China's 2030 project) aims to increase the record distance for successful quantum entanglement between two points on Earth. Technically, the lasers used to beam the entangled photons between the stations will have to achieve a high level of precision to reach the selected targets, but limitations remain.

Rania Toukebri is a spacecraft systems engineer, Regional Coordinator for Africa at the Space Generation Advisory Council in support of the United Nations, a space strategy consultant, and a cofounder of the HudumaPlus company.

See original here:
#SpaceWatchGL Opinion: Quantum Technology and Impact of the Global Space Security - SpaceWatch.Global

Neural's guide to the glorious future of AI: Here's how machines become sentient – The Next Web

Welcome to Neural's guide to the glorious future of AI. What wonders will tomorrow's machines be capable of? How do we get from Alexa and Siri to Rosie the Robot and R2D2? In this speculative science series we'll put our optimist hats on and try to answer those questions and more. Let's start with a big one: The Singularity.

The future realization of robot lifeforms is referred to by a plethora of terms: sentience, artificial general intelligence (AGI), living machines, self-aware robots, and so forth. But the one that seems most fitting is The Singularity.

Rather than debate semantics, we're going to sweep all those little ways of saying "human-level intelligence or better" together and conflate them to mean: a machine capable of at least human-level reasoning, thought, memory, learning, and self-awareness.

Modern AI researchers and developers tend to gravitate towards the term AGI. Normally, we'd agree, because general intelligence is grounded in metrics we can understand: to qualify, an AI would have to be able to do most of the stuff a human can.

But there's a razor-thin margin between "as smart as" and "smarter than" when it comes to hypothetical general intelligence, and it seems likely a mind powered by supercomputers, quantum computers, or a vast network of cloud servers would have far greater sentient potential than our mushy organic ones. Thus, we'll err on the side of superintelligence for the purposes of this article.

Before we can even start to figure out what a superintelligent AI would be capable of, however, we need to determine how it's going to emerge. Let's make some quick decisions for the purposes of discussion:

So how will our future metal buddies gain the spark of consciousness? Let's get super scientific here and crank out a listicle with five separate ways AI could gain human-level intelligence and awareness:

In this first scenario, if we predict even a modest year-over-year increase in computation and error-correction abilities, it seems entirely plausible that machine intelligence could be brute-forced into existence by a quantum computer running strong algorithms in just a couple centuries or so.

Basically, this means the incredibly potent combination of exponentially increasing power and self-replicating artificial intelligence could cook up a sort of digital, quantum, primordial soup for AI where we just toss in some parameters and let evolution take its course. We've already entered the era of quantum neural networks, so a quantum AGI doesn't seem all that far-fetched.
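As a very loose sketch of what "letting evolution take its course" could mean computationally, the toy genetic algorithm below evolves random bit strings toward an arbitrary target through selection and mutation. It is a plain classical illustration of evolutionary search, not a quantum algorithm and certainly not a recipe for AGI; the population size, mutation rate, and fitness function are all invented for the example.

```python
# Toy evolutionary search: random genomes plus selection and mutation drift toward
# a fitness target. Purely illustrative; all parameters are arbitrary.
import random

TARGET = [1] * 32                                # arbitrary goal: a string of 32 ones

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)   # selection: keep the fittest genomes
    parents = population[:10]
    population = [
        [1 - g if random.random() < 0.02 else g  # mutation: rare bit flips
         for g in random.choice(parents)]
        for _ in range(50)
    ]
    best = max(population, key=fitness)
    if fitness(best) == len(TARGET):             # stop once the target is reached
        break

print(f"generation {generation}: best fitness {fitness(best)}")
```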

What if intelligence doesn't require power? Sure, our fleshy bodies need energy to continue being alive and computers need electricity to run. But perhaps intelligence can exist without explicit representation. In other words: what if intelligence and consciousness can be reduced to purely mathematical concepts that only become apparent when properly executed?

A researcher by the name of Daniel Buehrer seems to think this could be possible. They wrote a fascinating research paper proposing the creation of a new form of calculus that would, effectively, allow an intelligent master algorithm to emerge from its own code.

The master algorithm idea isn't new (the legendary Pedro Domingos literally wrote the book on the concept), but what Buehrer's talking about is a different methodology. And a very cool one at that.

Here's Buehrer's take on how this hypothetical self-perpetuating calculus could unfold into explicit consciousness:

Allowing machines to modify their own model of the world and themselves may create conscious machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus's model of the world and the results of what its robots actually caused to happen in the world.

They even go on to propose that such a consciousness would be capable of having little internal thought wars to determine which actions occurring in the machine's mind's eye should be effected in the physical world. The whole paper is pretty wild; you can read more here.

This one's pretty easy to wrap your head around (pun intended). Instead of a bunch of millionaire AI developers with billion-dollar big tech research labs figuring out how to create a new species of intelligent being out of computer code, we just figure out how to create a perfect artificial brain.

Easy, right? The biggest upside here would be the potential for humans and machines to occupy the same spaces. This is clearly a recipe for augmented humans: cyborgs. Perhaps we could become immortal by transferring our own consciousnesses into non-organic brains. But the bigger picture would be the ability to develop robots and AI in the true image of humans.

If we can figure out how to make a functional replica of the human brain, including the entire neural network housed within it, all we'd need to do is keep it running and shovel the right components and algorithms into it.

Maybe conscious machines are already here. Or maybe they'll quietly show up a year or a hundred years from now, completely hidden in the background. I'm talking about cloud consciousness: the idea that a self-replicating, learning AI created solely to optimize large systems could one day gain a form of sentience that would, qualitatively, indicate superintelligence but otherwise remain unnoticed by humans.

How could this happen? Imagine if Amazon Web Services or Google Search released a cutting-edge algorithm into their respective systems a few decades from now and it created its own self-propagating solution system that, through the sheer scope of its control, became self-aware. We'd have a ghost in the machine.

Since this self-organized AI system wouldn't have been designed to interface with humans or translate its interpretations of the world it exists in into something humans can understand, it stands to reason that it could live forever as a superintelligent, self-aware, digital entity without ever alerting us to its presence.

For all we know there's a living, sentient AI chilling out in the Gmail servers just gathering data on humans (note: there almost certainly isn't, but it's a fun thought exercise).

Don't laugh. Of all the methods by which machines could hypothetically gain true intelligence, alien tech is the most likely to make it happen in our lifetimes.

Here we can make one of two assumptions: Aliens will either visit us sometime in the near future (perhaps to congratulate us on achieving quantum-based interstellar communication) or we'll discover some ancient alien technology once we put humans on Mars within the next few decades. These are the basic plots of Star Trek and the Mass Effect video game series, respectively.

Here's hoping that, no matter how The Singularity comes about, it ushers in a new age of prosperity for all intelligent beings. But just in case it doesn't work out so well, we've got something that'll help you prepare for the worst. Check out these articles in Neural's Beginner's Guide to the AI Apocalypse series:

Published November 18, 2020 19:50 UTC

Excerpt from:
Neurals guide to the glorious future of AI: Heres how machines become sentient - The Next Web

One for the haters: Twitter considers adding a dislike button – The Next Web

Over the years there have been two missing components everyone on Twitter moans about: the notorious edit option and the dislike button. Well, it turns out we might be getting one of those in the future.

Responding to a tweet from security expert Jackie Singh, Twitter product lead Kayvon Beykpour revealed the company is exploring adding a dislike button to its platform, but it's simply not one of its most urgent priorities.

Instead, Twitter is currently concentrating its efforts on cutting the spread of inauthentic behavior, enhancing the safety of its users with better tools to curb and report harassment, and cracking down on misinformation that could have harmful effects on its users.

Anyone who actively uses Twitter already knows the company has spent a considerable amount of time battling harassment and the spread of misinformation on its platform. Indeed, it has introduced a slew of features aimed at solving those two issues over the years.

More recently, the company shared it had labeled over 300,000 tweets for election misinformation, some of which were posted by none other than US President Donald Trump.

To be fair, Twitter has previously experimented with the idea of a dislike button, although not quite in the same way its like button works.

The company had briefly made it possible for users to report tweets they don't like, but it was impossible for other users to see a tally of the dislikes a tweet had received. It's unclear if Twitter is exploring any alternatives beyond this, but time will tell.

Until then, you'll simply have to make do with the good old ratio.

via Gizmodo


Original post:
One for the haters: Twitter considers adding a dislike button - The Next Web

Eight technology trends that will disrupt the banking industry – Consultancy-me.com

Technology is rapidly transforming the way banks operate and serve their customers, and it is becoming a key enabler of competitive edge. According to a new report by Deloitte's Middle East Financial Services practice, eight emerging technologies are set to disrupt the banking industry in the coming years. Below is an outline of the technologies and some of the key benefits they offer the banking industry.

Cloud is an essential tool of today's service delivery model, and enables banks to penetrate new business opportunities and access new delivery channels. By leveraging cloud-based services, banks are able to decrease data storage costs through saving on capital expenditure (CAPEX) and operating expenditure (OPEX), while ensuring customer data is protected.

There are three types of cloud services: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

Some of the key benefits of cloud-based working for banks include:

Big data refers to large and complex datasets that create significant challenges for traditional data management and analysis tools in practical timeframes. Using advanced analytics, banks can apply technology to efficiently extract valuable insights from data, and use those to improve the (strategic) decision-making process.

Benefits of Big Data analytics for the banking sector include:

Artificial Intelligence (AI) is now becoming a part of the business environment and is reinventing the entire ecosystem of the banking sector. By increasing the level of automation and using dynamic systems, AI supports decision-making, enhances the customer experience, and improves operational efficiency. AI also provides a strategic oversight for getting value out of data, which is now needed more than ever due to the data influx from a wide range of sources.

Benefits of artificial intelligence in the banking sector include:

Deloitte foresees a growing appetite for AI investment across the Middle East. In fact, in one scenario, spending could reach over US$100 million in 2021.

The Internet of Things (IoT) is a technology that connects devices and sensors in a network with the aim of providing better data-driven insights. The banking sector started utilizing IoT relatively late compared to sectors such as energy and automotive. However, IoT has been gaining importance in financial services lately, especially in retail banking, where banks are investing heavily in IoT for their internal infrastructure and consumer-facing capabilities.

The use of IoT devices will allow banks to collect massive stockpiles of customer data, ranging from demographic details to income and spending patterns to preferences. Access to this amount of data has the potential to drive fundamental change in the industry, including increasing operational efficiency, preventing fraud, reducing nonperforming assets (NPAs), improving employee and customer efficiency, and facilitating easier verification, loan tracking, and customer retention.

The banking industry is mandating the use of intelligent automation to drive efficiency, eliminate repetition, and improve customer satisfaction by providing fast and efficient services. The technology behind this automation is called robotic process automation (RPA).

RPA is transforming how banks operate. Some key benefits of RPA in the banking industry:

Blockchain technology and its associated distributed ledgers were devised as a simple yet smart solution to keep track of the Bitcoin cryptocurrency in circulation. The solution leveraged a distributed ledger architecture under which all users who participated as nodes in the network had a copy of the entire ledger.

Benefits of Blockchain technology in the banking sector include:

A quantum computer is a new type of computer that harnesses the power of quantum mechanics to solve problems that were previously believed to be intractable on regular computers. In the banking sector, the authors predict four major use cases.

There is still some way to go, however, before quantum computing becomes a reality. According to Deloitte, the 2020s will likely be a time of progress in quantum computing, but the 2030s are the most likely decade for a larger market to develop.

Open Banking refers to the movement in which banks work together in an ecosystem of (technology) partners. Banks broadly have four strategic options: full-service provider, utility, supplier, and marketplace interface.

These four options are not mutually exclusive. Two of these, utility and supplier, involve losing control of the customer interface as products and distribution become unbundled. However, organizations pursuing more than one option are likely to need to sharpen their own proposition for each option they pursue to remain competitive.

Open banking is poised to introduce a number of opportunities both for incumbents and new entrants:

In related news, another recent report by Deloitte's Middle East Financial Services practice found that one fifth of Middle East bank account holders now use FinTech solutions to bolster their experience and financial management.

View original post here:
Eight technology trends that will disrupt the banking industry - Consultancy-me.com

What’s Next In AI, Chips And Masks – SemiEngineering

Aki Fujimura, chief executive of D2S, sat down with Semiconductor Engineering to talk about AI and Moore's Law, lithography, and photomask technologies. What follows are excerpts of that conversation.

SE: In the eBeam Initiative's recent Luminary Survey, the participants had some interesting observations about the outlook for the photomask market. What were those observations?

Fujimura: In the last couple of years, mask revenues have been going up. Prior to that, mask revenues were fairly steady at around $3 billion per year. Recently, they have gone up beyond the $4 billion level, and they're projected to keep going up. Luminaries believe a component of this increase is because of the shift in the industry toward EUV. One question in the survey asked participants, "What business impact will COVID have on the photomask market?" Some people think it may be negative, but the majority of the people believe that it's not going to have much of an effect or it might have a positive effect. At a recent eBeam Initiative panel, the panelists commented that the reason for a positive outlook might be the demand picture in the semiconductor industry. The shelter-in-place and work-from-home environments are creating more need and opportunities for the electronics and semiconductor industries.

SE: How will extreme ultraviolet (EUV) lithography impact mask revenues?

Fujimura: In general, two thirds of the participants in the survey believe that it will have a positive impact. When you go to EUV, you have fewer masks. This is because EUV brings the industry back to single patterning. 193nm immersion with multiple patterning requires more masks at advanced nodes. With EUV, you have fewer masks, but the mask cost for each EUV layer is higher.

SE: For decades, the IC industry has followed the Moore's Law axiom that transistor density in chips doubles every 18 to 24 months. At this cadence, chipmakers can pack more and smaller transistors on a die, but Moore's Law appears to be slowing down. What comes next?

Fujimura: The definition of Moore's Law is changing. It's no longer about trends in CPU clock speeds. Those aren't changing much. It's scaling more by bit width than by clock speed. A lot of that has to do with thermal properties and other things. We have some theories on where we can make that better over time. On the other hand, if you include things like massively parallel computing using GPUs, having more CPU cores, how quickly you can access memory, and how much memory you can access, Moore's Law is very much alive. For example, D2S supplies computing systems for the semiconductor manufacturing industry, so we are also a consumer of technology. We do heavy supercomputing, so it's important for us to understand what's happening on the computing capability side. What we see is that our ability to compute is continuing to improve at about the same rate as before. But as programmers we have to adapt how we take advantage of it. It's not like you can take the same code and it automatically scales like it did 20 years ago. You have to understand how that scaling is different at any given point in time. You have to figure out how you can take advantage of the strength of the new generation of technology and then shift your code. So it's definitely harder.

SE: What's happening with the logic roadmap?

Fujimura: We're at 5nm in terms of what people are starting to do now. They are starting to plan 3nm and 2nm. And in terms of getting to the 2nm node, people are pretty comfortable. The question is what happens beyond that. It wasn't too long ago that people were saying: "There's no way we're going to have 2nm." That's been the general pattern in the semiconductor industry. The industry is constantly re-inventing itself. It is extending things longer than people ever thought possible. For example, look how long 193nm optical lithography lasted at advanced nodes. At one time, people were waiting for EUV. There was once a lot of doom and gloom about EUV. But despite it being late, companies developed new processes and patterning schemes to extend 193nm. It takes coordination by a lot of people to make this happen.

SE: How long can we extend the current technology?

Fujimura: There's no question that there is a physical limit, but we are still good for the next 10 years.

SE: There's a lot of activity around AI and machine learning. Where do you see deep learning fitting in?

Fujimura: Deep learning is a subset of machine learning. It's the subset that's made machine learning revolutionary. The general idea of deep learning is to mimic how the brain works with a network of neurons, or nodes. The programmer first determines what kind of a network to use. The programmer then trains the network by presenting it with a whole bunch of data. Often, the network is trained on labeled data. Using defect classification as an example, a human or some other program labels each picture as being a defect or not, and may also label what kind of defect it is, or even how it should be repaired. The deep learning engine iteratively optimizes the weights in the network. It automatically finds a set of weights that results in the network best mimicking the labels. Then, the network is tried on data that it wasn't trained on, to see if the network learned as intended.
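As a minimal sketch of the loop Fujimura describes, the Python code below trains a tiny network to mimic defect/no-defect labels and then runs it on held-out data. The data here is random noise, and the architecture, sizes, and defect-classification framing are assumptions for illustration; a real mask-inspection model would be trained on actual SEM images.

```python
# Minimal sketch of supervised training: a small network learns to mimic
# human-provided defect/no-defect labels. Data and architecture are illustrative.
import torch
import torch.nn as nn

# Synthetic stand-in for labeled inspection images: 256 single-channel 32x32 patches.
images = torch.randn(256, 1, 32, 32)
labels = torch.randint(0, 2, (256,))          # 0 = no defect, 1 = defect (toy labels)

model = nn.Sequential(                        # a tiny convolutional classifier
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                       # iteratively adjust weights to fit the labels
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                           # gradients of the loss w.r.t. every weight
    optimizer.step()

# Held-out data (here just more random patches) tests whether the network generalizes.
with torch.no_grad():
    test_preds = model(torch.randn(64, 1, 32, 32)).argmax(dim=1)
```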

SE: What can't deep learning do?

Fujimura: Deep learning does not reason. Deep learning does pattern matching. Amazingly, it turns out that many of the world's problems are solvable purely with pattern matching. What you can do with deep learning is a set of things that you just can't do with conventional programming. I was an AI student in the early 1980s. Many of the best computer scientists in the world back then (and ever since) were already trying hard to create a chess program that could beat the chess masters. It wasn't possible until deep learning came along. Applied to semiconductor manufacturing, or any field, there are classes of problems that had not been practically possible without deep learning.

SE: Years ago, there wasn't enough compute power to make machine learning feasible. What changed?

Fujimura: The first publication describing convolutional neural networks was in 1975. The researcher, Dr. Kunihiko Fukushima, called it the neocognitron back then, but the paper basically describes deep learning. But computational capability simply wasn't sufficient. Deep learning was enabled by what I call "useful waste" in massive computations on cost-effective GPUs.

SE: What problems can deep learning solve?

Fujimura: Deep learning can be used for any data. For example, people use it for text-to-speech, speech-to-text, or automatic translation. Where deep learning is most evolved today is when we are talking about two-dimensional data and image processing. A GPU happens to be a good platform for deep learning because of its single instruction multiple data (SIMD) processing nature. The SIMD architecture is also good at image processing, so it makes sense that it's applied in that way. So for any problem in which a human expert can look at a picture without any other background knowledge and tell something with high probability, deep learning is likely to do well.
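A rough illustration of why two-dimensional image work maps well onto SIMD hardware: a convolution applies the same small weight kernel at every pixel, so the computation is identical across many data elements. The image, kernel, and sizes below are arbitrary stand-ins.

```python
# Toy 2D convolution: the same multiply-accumulate runs at every pixel position,
# which is exactly the kind of uniform work a GPU executes in parallel.
import numpy as np

image = np.random.rand(64, 64)                 # stand-in for a grayscale image
kernel = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]], dtype=float)   # a simple edge-detection filter

out = np.zeros((62, 62))
for i in range(62):                            # identical operation at every position;
    for j in range(62):                        # SIMD hardware does these in parallel
        out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
```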

SE: What about machine learning in semiconductor manufacturing?

Fujimura: We have already started to see products incorporating deep learning, both in software and equipment. Any tedious and error-prone processes that human operators need to perform, particularly those involving visual inspection, are great candidates for deep learning. There are many opportunities in inspection and metrology. There are also many opportunities in software to produce more accurate results faster to help with the turnaround time issues in leading-edge mask shops. There are many opportunities in correlating big data from mask shops and machine log files with machine learning for predictive maintenance.

SE: What are the challenges?

Fujimura: Deep learning is only as good as the data that it is given, so caution is required in deploying deep learning. For example, if deep learning is used to screen resumes by learning from labels provided by prior hiring practices, deep learning learns the biases that are already built into those past practices, even if unintended. If operators tend to make a certain type of mistake in categorizing an image, deep learning that learned from data labeled by those operators' past behavior would learn to make the same mistake. If deep learning is used to identify suspected criminal behavior in street images captured by cameras based on a past history of arrests, deep learning will try the best it can to mimic the past behavior. If deep learning is used to identify what a social media user tends to want to see in order to maximize advertising revenues, deep learning will learn to be extremely good at showing the user exactly what the user tends to watch, even if it is highly biased, fake, or inappropriate. If misused, deep learning can accentuate and accelerate human addiction and biases. Deep learning is a powerful weapon that relies on the humans wielding it to use it carefully.

SE: Is machine learning more accurate than a human in performing pattern recognition tasks?

Fujimura: In many cases, it's found that a deep learning-based program can inference better, with a higher percentage of accuracy, than a human, particularly when you look at it over time. A human might be able to look at a picture and recognize it with 99% accuracy. But if the same human has to look at a much larger data set, and do it eight hours a day for 200 days a year, the performance of the human is going to degrade. That's not true for a computer-based algorithm, including deep learning. The learning algorithms process vast amounts of data. They go through small sections at a time and go through every single one without skipping anything. When you take that into account, deep learning programs can be useful for these error-prone processes that are visually oriented or can be cast into being visually oriented.

SE: The industry is working on other technologies to replicate the functions of the brain. Neuromorphic computing is one example. How realistic is this?

Fujimura: The brain is amazing. It will take a long time to create a neural network of the actual brain. There are very interesting computing models in the future. Neuromorphic is not a different computing model. It's a different architecture for how you do it. It's unclear if neuromorphic computing will necessarily create new kinds of capabilities. It does make some of them more efficient and effective.

SE: What about quantum computing?

Fujimura: The big change is quantum computing. That takes a lot of technology, money and talent. It's not an easy technology to develop. But you can bet that leading technology countries are working on it, and there is no question in my mind that it's important. Take security, for example. 256-bit encryption is nothing in basic quantum computing. Security mechanisms would have to be significantly revamped in the world of quantum computing. Quantum computing used in the wrong way can be destructive. Staying ahead of that is a matter of national security. But quantum computing also can be very powerful in solving problems that were considered intractable. Many iterative optimization problems, including deep learning training, will see major discontinuities with quantum computing.

SE: Let's move back to the photomask industry. Years ago, the mask was simple. Over time, masks have become more complex, right?

Fujimura: At 130nm, or around there, you started to see decorations on the mask. If you wanted to draw a circle on the wafer using Manhattan or rectilinear shapes, you actually drew a square on the mask. Eventually, it would become a circle on the wafer. However, starting at around 130nm, that square on the mask had to be written with decorations in all four corners. Then, SRAFs (sub-resolution assist features) started to appear on the mask around 90nm. There might have been some at 130nm, but mostly at 90nm. By 22nm, you couldn't find a critical-layer mask that didn't have SRAFs on it. SRAFs are features on the mask that are designed explicitly not to print on the wafer. Through an angle, SRAFs project light into the main features that you do want to print on the wafer, enough to help augment the amount of energy that's being applied to the resist. Again, this makes the printing of the main features more resilient to manufacturing process variation.

SE: Then multiple patterning appeared around 16nm/14nm, right?

Fujimura: The feature sizes became smaller and more complex. When we reached the limit of resolution for 193i, there was no choice but to go to multiple patterning, where multiple masks printed one wafer layer. You divide the features that you want on a given wafer layer and you put them on different masks. This provided more space for SRAFs for each of the masks. EUV for some layers is projected to go to multiple patterning, too. It costs more to do multiple patterning, but it is a familiar and proven technique for extending lithography to smaller nodes.

SE: To pattern a photomask, mask makers use e-beam mask writer systems based on variable shaped beam (VSB) technology. Now, using thousands of tiny beams, multi-beam mask writers are in the market. How do you see this playing out?

Fujimura: Most semiconductor devices are being patterned using VSB writers for the critical layers. That's working fine. The write times are increasing. If you look at the eBeam Initiative's recent survey, the average write times are still around 8 hours. Going forward, we are moving toward more complex processes with EUV masks. Today, EUV masks are fairly simple. Rectangular writing is enough. But you need multi-beam mask writers because of the resist sensitivity. The resists are slow in order to be more accurate. We need to apply a lot of energy to make it work, and that is better with multi-beam mask writers.

SE: What's next for EUV masks?

Fujimura: EUV masks will require SRAFs, too. They don't today at 7nm. SRAFs are necessary for smaller features. And, for 193i as well as for EUV, curvilinear masks are being considered now for improvements in wafer quality, particularly in resilience to manufacturing variation. But for EUV in particular, because of the reflective optics, curvilinear SRAFs are needed even more. Because multi-beam mask writing enables curvilinear mask shapes without a write time penalty, the enhanced wafer quality in the same mask write time is attractive.

SE: What are the big mask challenges going forward?

Fujimura: There are still many. EUV pellicles, affordable defect-free EUV mask blanks, high-NA EUV, and actinic or e-beam-based mask inspection, both in the mask shop and in the wafer shop for requalification, are all important areas for advancement. Now, the need to adopt curvilinear mask shapes has been widely acknowledged. Data processing, including compact and lossless data representation that is fast to write and read, is an important challenge. Optical proximity correction (OPC) and inverse lithography technology (ILT), which are needed to produce these curvilinear mask shapes to maximize wafer performance, need to run fast enough to be practical.

SE: What are the challenges in developing curvilinear shapes on masks?

Fujimura: There are two issues. First, without multi-beam mask writers, producing masks with curvilinear shapes can be too expensive or may practically take too long to write. Second, controlling the mask variation is challenging. Once again, the reason you want curvilinear shapes on the mask is because wafer quality improves substantially. That is even more important for EUV than in 193nm immersion lithography. EUV masks are reflective. So there is also a 6-degree incidence angle on EUV masks. And that creates more desire to have curvilinear shapes or SRAFs. They don't print on the wafer. They are printed on the mask in order to help decrease process variation on the wafer.

SE: What about ILT?

Fujimura: ILT is an advanced form of OPC that computes the desired mask shapes in order to maximize the quality of wafer lithography. Studies have shown that ILT, in particular unconstrained curvilinear ILT, can produce the best results in terms of resilience to manufacturing variation. D2S and Micron recently presented a paper on the benefits of full-chip, curvilinear, stitchless ILT with mask-wafer co-optimization for memory applications. This approach enabled more than a 2X improvement in process windows.

SE: Will AI play a big role in mask making?

Fujimura: Yes. In particular, with deep learning, the gap between a promising prototype and a production-level inference engine is very wide. While there was quite a bit of initial excitement over deep learning, the world still has not seen very much in production adoption of deep learning. A large amount of this comes from the need for data. In semiconductor manufacturing, data security is extremely important. So while a given manufacturer would have plenty of data of its own kind, a vendor of any given tool, whether software or equipment, has a difficult time getting enough customer data. Even for a manufacturer, creating new data (say, a SEM picture of a defect) can be difficult and time-consuming. Yet deep learning programming is programming with data, instead of writing new code. If a deep learning programmer wants to improve the success rate of an inference engine from 92% to 95%, that programmer needs to analyze the engine to see what types of data it needs to be additionally trained on to make that improvement, then acquire many instances of that type of data, and then iterate. The only way this can be done efficiently and effectively is to have digital twins, a simulated environment that generates data instead of relying only on physical real sample data. Getting to an 80% success rate can be done with thousands of collected real data samples. But getting to a 95% success rate requires digital twins. It is the lack of this understanding that is preventing production deployment of deep learning in many potential areas. It is clear to me that many of the tedious and error-prone processes can benefit from deep learning. And it is also clear to me that acceleration of many computing tasks using deep learning will benefit the deployment of new software capabilities in the mask shop.
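Here is a minimal sketch of what a digital twin for training data could look like, with an invented rendering model that is far simpler than any real lithography or SEM simulator: labeled synthetic defect patches are generated on demand instead of collecting scarce physical samples, and could then feed a training loop like the one sketched earlier.

```python
# Minimal "digital twin" sketch: render labeled synthetic patches on demand.
# The rendering model is invented for illustration only.
import numpy as np

def synthetic_patch(defect: bool, size: int = 32, rng=np.random):
    """Render one labeled grayscale patch: smooth background plus optional blob defect."""
    patch = 0.5 + 0.05 * rng.randn(size, size)           # noisy background "resist"
    if defect:
        cx, cy = rng.randint(4, size - 4, size=2)         # random defect location
        y, x = np.ogrid[:size, :size]
        patch += 0.4 * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 8.0)  # bright blob
    return np.clip(patch, 0.0, 1.0)

# Generate a labeled dataset on demand, no physical samples required.
labels = np.random.randint(0, 2, size=1000)
images = np.stack([synthetic_patch(bool(lbl)) for lbl in labels])
```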

Related Stories

EUVs Uncertain Future At 3nm And Below

Challenges Linger For EUV

Mask/Lithography Issues For Mature Nodes

The Evolution Of Digital Twins

Next-Gen Mask Writer Race Begins

See the original post:
What's Next In AI, Chips And Masks - SemiEngineering