Archive for the ‘Quantum Computing’ Category

Amazon's Werner Vogels: Enterprises are more daring than you might think – Protocol

When AWS unveiled Lambda in 2014, Werner Vogels thought the serverless compute service would be the domain of young, more tech-savvy businesses.

But it was enterprises that flocked to serverless first, Amazon's longtime chief technology officer told Protocol in an interview last week.

"For them, it was immediately obvious what the benefits were and how you only pay for the five microseconds that this code runs, and any idle is not being charged to you," Vogels said. "And you don't have to worry about reliability and security and multi-[availability zone] and all these things that then go out of the window. That was really an eye-opener for me: this idea that we sometimes have in our head that sort of the young businesses are more technologically advanced and moving faster. Clearly in the area of serverless, that was not the case."

AWS Lambda launched into general availability in 2015, and more than a million customers are using it today, according to AWS.

Vogels gave Protocol a rundown on AWS Lambda and serverless computing, which allows customers to build and run applications and services without provisioning or managing servers. He also talked about Amazon CodeWhisperer, AWS' new machine learning-powered coding tool, launched in preview in June; how artificial intelligence and ML are changing developers' lives; and his thoughts on AWS providing customers with primitives versus higher-level managed services.

This interview has been edited and condensed for clarity.

So what's the state of the state on AWS Lambda and how it's helping customers, and are there any new features that we can expect?

You'll see a whole range of different migrations happening. We've had folks from Capital One that migrated old mainframe code to Lambda. [iRobot, which Amazon announced plans to acquire on Friday], the folks that make Roomba, the automatic [vacuum] cleaner, have their complete back end running as serverless because, for example, that's a service that their customers don't pay for, and as such, they really wanted to minimize their costs yet provide a good service. There's a whole range of different projects happening, whether that is pre-processing images at some telescope deep in Chile, all the way up to monitoring Snowcones running in the International Space Station, where they run Lambda on that device as well and actually can do processing of imagery and things like that. It's become quite pervasive in that sense.

Now, the one thing is, of course, if you have existing code and you want to move over to the cloud, moving over to a virtual machine is easy; it's all in the same environment that you had on-premises. If you want to decompose the application that you had and don't want to do too many code changes, containers are probably a better target for that.

But for quite a few of our customers that really want to start from scratch, that really want to innovate and really think about [what] event-driven architectures look like, serverless quickly becomes the default target for them. Mostly also because it's not only that we see significant reduction in cost for our customers, but also a significant reduction in their carbon footprints, because we're able to do much better packing on energy than customers would be able to do by themselves. We now also run serverless on our Graviton processors, so you'll easily see a 40% reduction in cost and energy usage.

But I'm always a bit ambivalent about the word serverless, mostly because many people associate it with when we launched Lambda. But in essence, the first service that we launched, S3, is also really serverless. For me, serverless means that our customers don't have to think about security, reliability, managing performance, managing scale, doing failover, all those kinds of things, and really controlling costs. And so, in essence, almost all services at AWS are serverless by nature. If you think about DynamoDB [a serverless NoSQL database], or if you think about Neptune [a graph database service] or any of the other services that we have, most of them are serverless because you don't have to think about provisioning them or managing them. That's all done for you.
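
For readers who have not used Lambda, the programming model is deliberately small. Here is a minimal sketch of a Python handler; the function name and the event fields are illustrative assumptions, not something from the interview:

    # A minimal AWS Lambda handler: AWS provisions, scales and bills per
    # invocation, and the developer supplies only this function.
    import json

    def handler(event, context):
        # 'event' carries the triggering payload (e.g., an S3 notification);
        # 'context' exposes runtime metadata such as remaining execution time.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"hello, {name}"}),
        }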

Can you talk about the value of CodeWhisperer and what you think is the next big thing for or the future of low-code/no-code?

For me, CodeWhisperer is more an assistant to a developer. There's a number of application areas where I think machine learning really shines, and that is augmenting professionals by helping them, taking away mundane tasks. And we already did that, of course, in AWS. If you think about development, there's CodeGuru and DevOps Guru, which are both already machine-learning services to help customers with, on one hand, operations, and on the other, doing the early security checks during the development process.

CodeWhisperer takes that even a step further. If you look at how our developers develop, there are quite a few mundane tasks where you will go search on the web for a piece of code: how do we do [single sign-on] login into X, Y or Z? Most people will just cut and paste or do a little translation. If that was in Python and you need to actually write it in TypeScript, we may do a translation on that.

There's a lot of work, actually, that developers do in that particular area. So we thought that we could really help our customers there by using machine learning to look at the complete base of, on one hand, the AWS code, the Amazon code and all the open-source code that is out there, and then do a qualitative test on that, and then include it into this body of work where we can easily help customers by just writing some plain text, and then saying, "I want a [single sign-on] log-on here," and then the code automatically appears. And with that, we can do checks for security, we can do checks for bias. There's lots of other things that are now possible because we're basically assisting the developer in being more efficient and actually writing the code that they really want to write.
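
As a concrete illustration of that workflow, a developer states the intent in a comment and the assistant fills in the routine glue code. The example below is our own hypothetical sketch, not actual CodeWhisperer output; the task and function name are invented:

    # Prompt-style comment, as a developer might write it:
    # "upload a local file to an S3 bucket with server-side encryption"
    import boto3

    def upload_encrypted(filename: str, bucket: str, key: str) -> None:
        # Routine glue code of the kind an ML assistant aims to supply.
        s3 = boto3.client("s3")
        s3.upload_file(
            filename, bucket, key,
            ExtraArgs={"ServerSideEncryption": "AES256"},
        )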

When we launched Lambda, I said the only code that will be written in the future is business logic. Well, it turns out we're still not completely there, but tools like CodeWhisperer definitely help us get on that path, because you can focus on the unique code that you need to write for the application that you have, instead of the same code that everybody else needs to write.

People really like it. It's also something that we continuously improve. This is not a standing-still product. As we look at more code, as we get more feedback, the service improves.

If I think about software developers, it's one of the few jobs in the world where you can be truly creative and can go to work and create something new every morning. However, there's quite a bit of heavy lifting still around that [that] sort of has nothing to do with your creativity or your ability to solve problems. With CodeWhisperer, we really tried to take the heavy lifting away so that people can focus on the creativity part of the development job, and I think anything we can do there, developers like.

In your tech predictions for 2022, you said this is the year when artificial intelligence and machine learning take on the undifferentiated heavy lifting in the lives of developers. Can you expand on that, and on how AWS is helping with that?

When you think about CodeWhisperer and CodeGuru and DevOps Guru, or Copilot from GitHub, this is just the beginning of seeing machine learning applied to augment humans. Whether it's a radiologist somewhere who is looking at imagery late at night and gets help from machine learning to compare these images, or whether it's a developer, we're really at the cusp of how machine learning will accelerate the way that we can build digital systems.

I was in Germany not that long ago, and there the government told me that they have 80,000 open IT positions. With all the scarcity of labor in the world, anything we can do to make the life of developers easier so that they're more productive, and that makes it easier for people who do not have a four-year computer science degree to get started in the IT world, will benefit all the enterprises in the world.

What's another developer problem that you're trying to solve, or what are developers asking AWS for?

If you're an organization like AWS or Amazon or quite a few other organizations around the world, you make use of the DevOps principle, where basically your developers also have operational tasks. If you do operations, there's information coming from 10 or 20 different sides: there's log files, there's metrics, there's dashboards. Actually tying that information together, analyzing the massive amounts of log files that are being produced by systems in real time, surfacing that to the operators, showing that there may be potential problems here, and then giving context around it, because normally these log files are pretty cryptic. So what we do with DevOps Guru, for example, is provide context around it such that the operators can immediately start taking action, looking for what [the] root cause of particular problems is. So we're looking at all of the different aspects of development and operations to see what are the kinds of things that we can build to help customers there.
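
DevOps Guru's internals are not public, but the kind of correlation Vogels describes can be sketched in a few lines: flag the minutes where the error rate spikes and keep the raw log lines as context for the operator. The log format assumed here is hypothetical:

    from collections import Counter

    def surface_anomalies(log_lines, threshold=10):
        """Group ERROR lines by minute and surface only the anomalous minutes."""
        errors_per_minute = Counter()
        context = {}
        for line in log_lines:
            parts = line.split(" ", 2)
            if len(parts) < 2:
                continue  # skip lines that don't match the assumed format
            timestamp, level = parts[0], parts[1]
            minute = timestamp[:16]  # e.g. "2022-07-28T12:34"
            if level == "ERROR":
                errors_per_minute[minute] += 1
                context.setdefault(minute, []).append(line)
        # Return only minutes whose error volume crosses the threshold, with
        # the raw lines an operator needs to start root-cause analysis.
        return {m: context[m] for m, n in errors_per_minute.items() if n > threshold}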

At AWS re:Invent last year, you put up a slide that read "primitives, not frameworks," and you said AWS gives customers primitives, or simple machines, not frameworks. Meanwhile, Google Cloud and Microsoft are offering these sort of larger, chunkier blocks such as managed services where customers don't have to do the heavy lifting, and AWS also seems to be selling more of them as well.

Let me clarify that. It mostly has to do with the speed of innovation at AWS.

Last year, we launched more than 3,000 features and services. So why are we still looking at these fine-grained building blocks? Let me go back to the beginning of AWS. When we started, the way software companies at that moment provided infrastructure or platforms was basically that they would give developers everything [but] the kitchen sink on day one. And they would tell you, "This is how you shall develop software on this platform." Given that these platforms took quite a while to develop, basically what you operate is a platform that is already five years old, that is looking five years back.

Werner Vogels gives his keynote at AWS re:Invent 2021. Photo: Amazon Web Services, Inc.

We knew that if the cloud were really to be effective, development would change radically. Development would indeed be able to scale quicker and make use of multiple availability zones and many different types of databases and things like that. So we needed to make sure that we were not building things from the past, but that we were building for how our customers would want to build in 2025. To do that, you don't give them everything and tell them what to do. You give them small building blocks, and that's what I mean by primitives. And all these small building blocks together make a very rich ecosystem for developers to choose from.

Now, quite a few, especially the more tech-savvy companies, are more than happy to put these building blocks together themselves. For example, if you want to build a data lake, you have to use Glue [a serverless data integration service], you have to use S3, maybe some Redshift, Kinesis for ingestion, Athena for ad hoc analytics. I think there's quite a few customers that are building these things by themselves.

But then there's a whole category of customers that just want a data lake. They don't want to think about Glue and S3 and Kinesis, so we give them a service or solution called Lake Formation. That automatically pulls all these things together and gives them this higher-level component.
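
To make the contrast concrete, here is a minimal sketch of the do-it-yourself path using the boto3 SDK; the bucket, database and query names are hypothetical, and Lake Formation's value is that it wires this kind of plumbing together for you:

    # Composing data-lake primitives by hand: an S3 bucket for storage, a
    # Glue database for the catalog, and an Athena query over the result.
    import boto3

    s3 = boto3.client("s3")
    glue = boto3.client("glue")
    athena = boto3.client("athena")

    s3.create_bucket(Bucket="example-datalake-raw")               # storage layer
    glue.create_database(DatabaseInput={"Name": "example_lake"})  # metadata catalog

    # Ad hoc SQL over the lake; Athena writes query results back to S3.
    athena.start_query_execution(
        QueryString="SELECT COUNT(*) FROM example_lake.events",
        ResultConfiguration={"OutputLocation": "s3://example-datalake-raw/athena/"},
    )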

Now, on the fact that we are delivering these higher-level solutions: for example, some customers just want a backup solution, and they don't want to think about how to move things into S3 and then do some intelligent tiering [so] that if this data isn't accessed in two weeks, it is moved into cold storage. They don't want to think about that. They just want a backup solution. And so for that, we provide them a backup service. So we do have these higher-level, more managed-style services for you, but they're all still based on the primitives that sit underneath. So whether you want to start with Lake Formation and later on maybe start tweaking things under the covers, that's still possible for you. While we are providing these higher-level components, where customers need to worry less about which components fit together, we still provide the underlying components to the developers as well.

Is quantum computing something that enterprise CTOs should be keeping their eye on? Do you expect there to be an enterprise use for it, or will it be a domain just for researchers, or is it just too far out to surmise?

There is a back-and-forth there. If I look at some of the newer developments, it's clearly research oriented. The reason for us to provide Braket, which is our quantum compute service, is that customers generally start experimenting with the different types of hardware that are out there. And there's typical usage there. It's life sciences, it's oil and gas. All of these companies are already investigating whether they could see significant speed-ups if they would transform their algorithms into things that could run on a quantum machine.
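
The barrier to that first experiment is low; here is a minimal sketch using the open-source Braket SDK's local simulator (no quantum hardware involved): prepare a Bell pair and sample it.

    # Prepare and measure a Bell pair on the Braket local simulator.
    from braket.circuits import Circuit
    from braket.devices import LocalSimulator

    bell = Circuit().h(0).cnot(0, 1)   # superposition, then entanglement
    result = LocalSimulator().run(bell, shots=1000).result()
    print(result.measurement_counts)   # roughly half '00', half '11'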

Now, there's a major difference between, let's say, traditional development and quantum development. The tools, the compilers, the software principles, the books, the documentation: for traditional development, that's huge, and you have great support.

In quantum, I think what we'll see in the coming four or five years, as I listen to the Amazon researchers working on this, [is that] much of the work will go not only into hardware but also into providing better software support around it, such that development for these types of machines becomes easier or even reaches the same level as for traditional machines. But one of the things that I think is very, very clear is that we're not going to be able to solve new problems necessarily with quantum computing; we're just going to be able to solve old problems much, much faster. That's why the life sciences companies and health care and companies that are very interested in high-performance compute are experimenting with quantum, because that could accelerate their algorithms, maybe by orders of magnitude. But we still have to see the results of that. So I'm keeping a very close eye on it, because I think there may be very interesting workloads and application areas in the future.

Read more:
Amazon's Werner Vogels: Enterprises are more daring than you might think - Protocol

FM holds talks with US NSF chief, discusses collaboration in science and technology – Devdiscourse

Finance Minister Nirmala Sitharaman on Sunday met Director of the US National Science Foundation (NSF) Sethuraman Panchanathan and discussed fostering ties in domains such as artificial intelligence, space, agriculture and health. The two sides discussed areas of collaboration related to science and technology (S&T) which emerged during the meeting between Prime Minister Narendra Modi and US President Joe Biden during the QUAD Summit in Tokyo in May, the finance ministry said in a series of tweets. "Both sides emphasised to further enhance and strengthen the time-tested, democratic & value-based mutual partnership in specific domains such as artificial intelligence, data science, quantum computing, space, agriculture and health," it added. Panchanathan indicated that many projects will be launched soon in association with the Department of Science and Technology under six technology innovation hubs. "While talking about the mission and objectives of @NSF, @DrPanch elaborated on achieving innovation at speed and scale with inclusion and solution-based approach in research," the ministry tweeted. "Union Finance Minister Smt. @nsitharaman talked about India's achievement in fostering innovation through #AtalInnovationMission, #Start-upIndia, #StandUpIndia, reforms in patent processes and advancement of appropriate technology in agriculture," it added.

(This story has not been edited by Devdiscourse staff and is auto-generated from a syndicated feed.)

See the rest here:
FM holds talks with US NSF chief, discusses collaboration in science and technology - Devdiscourse

CXL Brings Datacenter-sized Computing with 3.0 Standard, Thinks Ahead to 4.0 – HPCwire

A new version of a standard backed by major cloud providers and chip companies could change the way some of the world's largest datacenters and fastest supercomputers are built.

The CXL Consortium on Tuesday announced a new specification called CXL 3.0, also known as Compute Express Link 3.0, that eliminates more chokepoints that slow down computation in enterprise computing and datacenters.

The new spec provides a communication link between chips, memory and storage in systems, and it is two times faster than its predecessor, CXL 2.0.

CXL 3.0 also has improvements for more fine-grained pooling and sharing of computing resources for applications such as artificial intelligence.

"CXL 3.0 is all about improving bandwidth and capacity, and can better provision and manage computing, memory and storage resources," said Kurt Lender, the co-chair of the CXL marketing work group (and senior ecosystem manager at Intel), in an interview with HPCwire.

Hardware and cloud providers are coalescing around CXL, which has steamrolled other competing interconnects. This week, OpenCAPI, an IBM-backed interconnect standard, merged with the CXL Consortium, following in the footsteps of Gen-Z, which did the same in 2020.

The consortium released the first CXL 1.0 specification in 2019 and quickly followed it up with CXL 2.0, which supported PCIe 5.0, found in a handful of chips such as Intel's Sapphire Rapids and Nvidia's Hopper GPU.

The CXL 3.0 spec is based on PCIe 6.0, which was finalized in January. CXL 3.0 has a data transfer speed of up to 64 gigatransfers per second, the same as PCIe 6.0.
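
The headline figure translates into raw link bandwidth with simple arithmetic. This back-of-envelope sketch assumes a x16 link and ignores FLIT and protocol overhead, so delivered throughput will be lower:

    # Raw bandwidth implied by the signaling rate: 64 GT/s per lane, one bit
    # per transfer, x16 link, per direction, before protocol overhead.
    transfers_per_second = 64e9   # PCIe 6.0 / CXL 3.0 signaling rate
    lanes = 16                    # assumed link width

    raw_bytes_per_second = transfers_per_second * lanes / 8
    print(f"{raw_bytes_per_second / 1e9:.0f} GB/s per direction")  # 128 GB/s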

The CXL interconnect can link up chips, storage and memory that are near and far from each other, and that allows system providers to build datacenters as one giant system, said Nathan Brookwood, principal analyst at Insight 64.

CXL's ability to support the expansion of memory, storage and processing in a disaggregated infrastructure gives the protocol a step up over rival standards, Brookwood said.

Datacenter infrastructures are moving to a decoupled structure to meet the growing processing and bandwidth needs for AI and graphics applications, which require large pools of memory and storage. AI and scientific computing systems also require processors beyond just CPUs, and organizations are installing AI boxes, and in some cases, quantum computers, for more horsepower.

CXL 3.0 improves bandwidth and capacity with better switching and fabric technologies, the CXL Consortium's Lender said.

"CXL 1.1 was sort of in the node, then with 2.0, you can expand a little bit more into the datacenter. And now you can actually go across racks, you can do decomposable or composable systems, with the fabric technology that we've brought with CXL 3.0," Lender said.

At the rack level, one can make CPU or memory drawers as separate systems, and improvements in CXL 3.0 provide more flexibility and options in switching resources compared to previous CXL specifications.

Typically, servers have a CPU, memory and I/O, and can be limited in physical expansion. In disaggregated infrastructure, one can take a cable to a separate memory tray through a CXL protocol without relying on the popular DDR bus.

"You can decompose or compose your datacenter as you like it. You have the capability of moving resources from one node to another, and don't have to do as much overprovisioning as we do today, especially with memory," Lender said, adding "it's a matter of you can grow systems and sort of interconnect them now through this fabric and through CXL."

The CXL 3.0 protocol uses the electricals of the PCI-Express 6.0 protocol, along with its protocols for I/O and memory. Some improvements include support for new processors and endpoints that can take advantage of the new bandwidth. CXL 2.0 had single-level switching, while 3.0 has multi-level switching, which enables larger fabrics but introduces more latency on the fabric.

"You can actually start looking at memory like storage: you could have hot memory and cold memory, and so on. You can have different tiering, and applications can take advantage of that," Lender said.

The protocol also accounts for the ever-changing infrastructure of datacenters, providing more flexibility on how system administrators want to aggregate and disaggregate processing units, memory and storage. The new protocol opens more channels and resources for new types of chips that include SmartNICs, FPGAs and IPUs that may require access to more memory and storage resources in datacenters.

"HPC composable systems: you're not bound by a box. HPC loves clusters today. And [with CXL 3.0] now you can do coherent clusters and low latency. The growth and flexibility of those nodes is expanding rapidly," Lender said.

The CXL 3.0 protocol can support up to 4,096 nodes, and has a new concept of memory sharing between different nodes. That is an improvement from a static setup in older CXL protocols, where memory could be sliced and attached to different hosts, but could not be shared once allocated.

"Now we have sharing where multiple hosts can actually share a segment of memory. Now you can actually look at quick, efficient data movement between hosts if necessary, or if you have an AI-type application that you want to hand data from one CPU or one host to another," Lender said.

The new feature allows peer-to-peer connection between nodes and endpoints in a single domain. That sets up a wall in which traffic can be isolated to move only between nodes connected to each other. That allows for faster accelerator-to-accelerator or device-to-device data transfer, which is key in building out a coherent system.

"If you think about some of the applications and then some of the GPUs and different accelerators, they want to pass information quickly, and now they have to go through the CPU. With CXL 3.0, they don't have to go through the CPU this way, but the CPU is coherent, aware of what's going on," Lender said.

The pooling and allocation of memory resources is managed by software called the Fabric Manager. The software can sit anywhere in the system or hosts to control and allocate memory, but it could ultimately impact software developers.
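
The real Fabric Manager interface is defined by the CXL specification; purely as a conceptual toy, pooled allocation and the CXL 3.0-style sharing described above can be pictured like this:

    # Toy model: segments are carved from a shared pool and mapped into one
    # host (pooling) or into several hosts at once (CXL 3.0-style sharing).
    class FabricManager:
        def __init__(self, pool_gb: int):
            self.free_gb = pool_gb
            self.segments = {}  # segment id -> (size_gb, set of hosts)

        def allocate(self, seg_id: str, size_gb: int, host: str) -> None:
            if size_gb > self.free_gb:
                raise MemoryError("pool exhausted")
            self.free_gb -= size_gb
            self.segments[seg_id] = (size_gb, {host})

        def share(self, seg_id: str, host: str) -> None:
            # Map an already-allocated segment into another host, so both
            # see the same coherent memory.
            self.segments[seg_id][1].add(host)

    fm = FabricManager(pool_gb=1024)
    fm.allocate("seg0", 256, host="hostA")
    fm.share("seg0", host="hostB")  # hostA and hostB now share seg0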

"If you get to the tiering level, and when you start getting all the different latencies in the switching, that's where there will have to be some application awareness and tuning of applications. I think we certainly have that capability today," Lender said.

It could be two to four years before companies start releasing CXL 3.0 products, and the CPUs will need to be aware of CXL 3.0, Lender said. Intel built in support for CXL 1.1 in its Sapphire Rapids chip, which is expected to start shipping in volume later this year. The CXL 3.0 protocol is backward compatible with the older versions of the interconnect standard.

CXL products based on earlier protocols are slowly trickling into the market. SK Hynix this week introduced its first DDR5 DRAM-based CXL memory samples and will start manufacturing CXL memory modules in volume next year. Samsung also introduced CXL DRAM earlier this year.

While products based on CXL 1.1 and 2.0 protocols are on a two-to-three-year product release cycle, CXL 3.0 products could take a little longer as it takes on a more complex computing environment.

"CXL 3.0 could actually be a little slower because of some of the Fabric Manager, the software work. They're not simple systems; when you start getting into fabrics, people are going to want to do proofs of concept and prove out the technology first. It's going to probably be a three-to-four-year timeframe," Lender said.

Some companies already started work on CXL 3.0 verification IP six to nine months ago and are fine-tuning the tools to the final specification, Lender said.

The CXL Consortium has a board meeting in October to discuss next steps, which could also involve CXL 4.0. The standards organization for PCIe, the PCI Special Interest Group, last month announced it was planning PCIe 7.0, which doubles PCIe 6.0's data transfer speed to 128 gigatransfers per second.

Lender was cautious about how PCIe 7.0 could potentially fit into a next-generation CXL 4.0. CXL has its own set of I/O, memory and cache protocols.

"CXL sits on the electricals of PCIe, so I can't commit or absolutely guarantee that [CXL 4.0] will run on 7.0. But that's the intent: to use the electricals," Lender said.

In that case, "one of the tenets of CXL 4.0 will be to double the bandwidth by going to PCIe 7.0, but beyond that, everything else will be what we do: more fabric or different tunings," Lender said.

CXL has been on an accelerated pace, with three specification releases since its formation in 2019. There was confusion in the industry about the best high-speed, coherent I/O bus, but the focus has now coalesced around CXL.

"Now we have the fabric. There are pieces of Gen-Z and OpenCAPI that aren't even in CXL 3.0, so will we incorporate those? Sure, we'll look at doing that kind of work moving forward," Lender said.

Link:
CXL Brings Datacenter-sized Computing with 3.0 Standard, Thinks Ahead to 4.0 - HPCwire

Quantum computing and the Australians on the cutting edge – 9News

Fans of Marvel movies know the word 'quantum' too well.

It's the name of the realm the Avengers used to time travel, and, fantastical as that is, the concept of quantum mechanics is far from fiction.

Scientists have toyed with the idea since the 1920s in an attempt to explain the mysteries of our universe that cannot be explained by traditional physics.

The University of Sydney (USYD) and the University of New South Wales (UNSW Sydney) are among Google's new partners, which already included Macquarie University (MQ) and the University of Technology Sydney (UTS).

Associate Professor Ivan Kassal, from USYD, believes advancements in quantum chemistry could lead to life-saving medicines and help predict the impact of atmospheric matter on our climate.

"Simulating chemistry is likely to be one of the first applications of quantum computers, and my goal is to develop the quantum algorithms that will allow near-term quantum computers to give us insights into chemical processes that are too complicated to simulate on any classical supercomputer," Kassal said.

Those are very physical problems to solve, but quantum computers could also potentially speed up solving complex systems, crack cryptography and enable new applications of machine learning.

Australia's Chief Scientist, Dr Cathy Foley, said Google's interest in Australia is "testament to the world-class research that has been supported by the Australian Research Council for over two decades".

"I am delighted that Google sees Australia as somewhere to do quantum research. A step in building Australia's quantum industry here," said Dr Foley.

Google is building its quantum research team in Sydney, including its newly-appointed quantum computing scientist, Dr Marika Kieferova.

Professor Michael Bremner of UTS said one of the biggest challenges in quantum computing "is understanding which applications quantum computers can deliver performance that goes beyond classical computing."

"In this project, my team at UTS will work with Google on this problem, examining the mathematical structures that drive quantum algorithms to go beyond classical computing," Professor Michael Bremner, UTS

Original post:
Quantum computing and the Australians on the cutting edge - 9News

USC's Biggest Wins in Computing and AI – USC Viterbi School of Engineering

USC has been an animating force for computing research since the late 1960s.

With the advent of the USC Information Sciences Institute (ISI) in 1972 and the Department of Computer Science in 1976 (born out of the Ming Hsieh Department of Electrical and Computer Engineering), USC has played a propulsive role in everything from the internet to the Oculus Rift to recent Nobel Prizes.

Here are seven of those victories reimagined as cinemagraphs: still photographs animated by subtle yet remarkable movements.

Cinemagraph: Birth of .Com

1. The Birth of the .com (1983)

While working at ISI, Paul Mockapetris and Jon Postel pioneered the Domain Name System, which introduced the .com, .edu, .gov and .org internet naming standards.

As Wired noted on the 25th anniversary, "Without the Domain Name System, it's doubtful the internet could have grown and flourished as it has."

The DNS works like a phone book for the internet, automatically translating text names, which are easy for humans to understand and remember, to numerical addresses that computers need. For example, imagine trying to remember an IP address like 192.0.2.118 instead of simply usc.edu.
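
The lookup that "phone book" performs is a single call in most languages; a minimal Python sketch:

    # Resolve a human-readable name to the numerical address computers need.
    import socket

    print(socket.gethostbyname("usc.edu"))  # whatever address usc.edu currently maps to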

In a 2009 interview with NPR, Mockapetris said he believed the first domain name he ever created was isi.edu for his employer, the (USC) Information Sciences Institute. That domain name is still in use today.

Grace Park, B.S. and M.S. '22 in chemical engineering, re-creates Len Adleman's famous experiment.

2. The Invention of DNA Computing (1994)

In a drop of water, a computation took place.

In 1994, Professor Leonard Adleman, who coined the term "computer virus," invented DNA computing, which involves performing computations using biological molecules rather than traditional silicon chips.

Adleman, who received the 2002 Turing Award (often called the Nobel Prize of computer science), saw that a computer could be something other than a laptop or machine using electrical impulses. After visiting a USC biology lab in 1993, he recognized that the 0s and 1s of conventional computers could be replaced with the four DNA bases: A, C, G and T. As he later wrote, "a liquid computer can exist in which interacting molecules perform computations."

As the New York Times noted in 1997: "Currently the world's most powerful supercomputer sprawls across nearly 150 square meters at the U.S. government's Sandia National Laboratories in New Mexico. But a DNA computer has the potential to perform the same breakneck-speed computations in a single drop of water."

"We've shown by these computations that biological molecules can be used for distinctly non-biological purposes," Adleman said in 2002. "They are miraculous little machines. They store energy and information, they cut, paste and copy."
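
As a toy sketch of the encoding idea: bits map onto the four bases, and Watson-Crick pairing (A with T, C with G) supplies the copy operation Adleman describes. The two-bit mapping below is our illustration, not his actual scheme:

    # Encode bits as DNA bases and compute the complementary strand that
    # would hybridize with them in solution.
    ENCODE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def to_strand(bits: str) -> str:
        # Pack a bitstring into a strand, two bits per base.
        return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def complement(strand: str) -> str:
        return "".join(COMPLEMENT[b] for b in strand)

    strand = to_strand("0110")          # -> "CG"
    print(strand, complement(strand))   # CG GC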

Professor Maja Matarić with Blossom, a cuddly robot companion that helps people with anxiety and depression practice breathing exercises and mindfulness.

3. USC Interaction Lab Pioneers Socially Assistive Robotics (2005)

Named No. 5 by Business Insider as one of the 25 Most Powerful Women Engineers in Tech, Maja Matarić leads the USC Interaction Lab, pioneering the field of socially assistive robotics (SAR).

As defined by Matarić and her then-graduate researcher David Feil-Seifer 17 years ago, socially assistive robotics was envisioned as the intersection of assistive robotics and social robotics: a new field that focuses on providing social support to help people overcome challenges in health, wellness, education and training.

Socially assistive robots have been developed for a broad range of user communities, including infants with movement delays, children with autism, stroke patients, people with dementia and Alzheimer's disease, and otherwise healthy elderly people.

"We want these robots to make the user happier, more capable and better able to help themselves," said Matarić, the Chan Soon-Shiong Chair and Distinguished Professor of Computer Science, Neuroscience and Pediatrics at USC. "We also want them to help teachers and therapists, not remove their purpose."

The field has inspired investments from federal funding agencies and technology startups. The assistive robotics market is estimated to reach $25.16 billion by 2028.

Is the ball red or blue? Is the cat alive or dead? Professor Daniel Lidar, one of the world's top quantum influencers, demonstrates the idea of superposition.

4. First Operational Quantum Computing System in Academia (2011)

Before Google or NASA got into the game, there was the USC-Lockheed Martin Quantum Computing Center (QCC).

Led by Daniel Lidar, holder of the Viterbi Professorship in Engineering, and ISI's Robert F. Lucas (now retired), the center launched in 2011. With the world's first commercial adiabatic quantum processor, the D-Wave One, USC is the only university in the world to host and operate a commercial quantum computing system.

As USC News noted in 2018, quantum computing is the ultimate disruptive technology: it has the potential to create the best possible investment portfolio, dissolve urban traffic jams and bring drugs to market faster. It can optimize batteries for electric cars, predictions for weather and models for climate change. Quantum computing can do this, and much more, because it can crunch massive data and variables and do it quickly, with advantage over classical computers as problems get bigger.

Recently, QCC upgraded to D-Wave's Advantage system, with more than 5,000 qubits, an order of magnitude larger than any other quantum computer. The upgrades will enable QCC to host a new Advantage generation of quantum annealers from D-Wave, the first Leap quantum cloud system in the United States. Today, in addition to Professor Lidar, one of the world's top quantum computing influencers, QCC is led by Research Assistant Professor Federico Spedalieri, as operations director, and Research Associate Professor Stephen Crago, associate director of ISI.
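
Annealers like QCC's D-Wave machines minimize quadratic functions of binary variables (QUBOs). Here is a tiny instance solved locally with the open-source dimod package, rather than on QCC's hardware:

    # Minimize the toy QUBO -x0 - x1 + 2*x0*x1, which penalizes setting
    # both variables; either single variable alone is optimal.
    import dimod

    bqm = dimod.BinaryQuadraticModel.from_qubo({(0, 0): -1, (1, 1): -1, (0, 1): 2})
    sampleset = dimod.ExactSolver().sample(bqm)
    print(sampleset.first.sample, sampleset.first.energy)  # e.g. {0: 1, 1: 0} -1.0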

David Traum, a leader at the USC Institute for Creative Technologies (ICT), converses with Pinchas Gutter, a Holocaust survivor, as part of the New Dimensions in Testimony project.

5. USC ICT Enables Talking with the Past, in the Future (2015)

New Dimensions in Testimony, a collaboration between the USC Shoah Foundation and the USC Institute for Creative Technologies (ICT), in partnership with Conscience Display, is an initiative to record and display testimony in a way that will continue the dialogue between Holocaust survivors and learners far into the future.

The project uses ICT's Light Stage technology to record interviews using multiple high-end cameras for high-fidelity playback. The ICT Dialogue Group's natural language technology allows fluent, open-ended conversation with the recordings. The result is a compelling and emotional interactive experience that enables viewers to ask questions and hear responses in real-time, lifelike conversation, even after the survivors have passed away.

New Dimensions in Testimony debuted in the Illinois Holocaust Museum & Education Center in 2015. Since then, more than 50 survivors and other witnesses have been recorded and presented in dozens of museums around the United States and the world. It remains a powerful application of AI and graphics to preserve the stories and lived experiences of culturally and historically significant figures.

Eric Rice and Bistra Dilkina are co-directors of the Center for AI in Society (CAIS), a remarkable collaboration between the USC Dworak-Peck School of Social Work and the USC Viterbi School of Engineering.

6. Among the First AI for Good Centers in Higher Education (2016)

Launched in 2016, the Center for AI in Society (CAIS) became one of the pioneering AI for Good centers in the U.S., uniting USC Viterbi and the USC Suzanne Dworak-Peck School of Social Work.

In the past, CAIS used AI to prevent the spread of HIV/AIDS among homeless youth. In fact, a pilot study demonstrated a 40% increase in homeless youth seeking HIV/AIDS testing due to an AI-assisted intervention. In 2019, the technology was also used as part of the largest global deployment of predictive AI to thwart poachers and protect endangered animals.

Today, CAIS fuses AI, social work and engineering in unique ways, such as working with the Los Angeles Homeless Service Authority to address homelessness; battling opioid addiction; mitigating disasters like heat waves, earthquakes and floods; and aiding the mental health of veterans.

CAIS is led by co-directors Eric Rice, a USC Dworak-Peck professor of social work, and Bistra Dilkina, a USC Viterbi associate professor of computer science and the Dr. Allen and Charlotte Ginsburg Early Career Chair.

Pedro Szekely, Mayank Kejriwal and Craig Knoblock of the USC Information Sciences Institute (ISI) are at the vanguard of using computer science to fight human trafficking.

7. AI That Fights Modern Slavery (2017)

Beginning in 2017, a team of researchers at ISI led by Pedro Szekely, Mayank Kejriwal and Craig Knoblock created software called DIG that helps investigators scour the internet to identify possible sex traffickers and begin the process of capturing, charging and convicting them.

Law enforcement agencies across the country, including in New York City, have used DIG as well as other software programs spawned by Memex, a Defense Advanced Research Projects Agency (DARPA)-funded program aimed at developing internet search tools to help investigators thwart sex trafficking, among other illegal activities. The specialized software has triggered more than 300 investigations and helped secure 18 felony sex-trafficking convictions, according to Wade Shen, program manager in DARPA's Information Innovation Office and Memex program leader. It has also helped free several victims.

In 2015, Manhattan District Attorney Cyrus R. Vance Jr. announced that DIG was being used in every human trafficking case brought by the DA's office. "With technology like Memex," he said, "we are better able to serve trafficking victims and build strong cases against their traffickers."

"This is the most rewarding project I've ever worked on," said Szekely. "It's really made a difference."


The rest is here:
USC's Biggest Wins in Computing and AI - USC Viterbi School of Engineering