Archive for the ‘AlphaGo’ Category

SysMoore: The Next 10 Years, The Next 1,000X In Performance – The Next Platform

What is the most important product that comes out of the semiconductor industry?

Here is a hint: It is inherent to the market, but enhanced by a positively reinforcing feedback loop of history. Here is another hint: You can't hold it in your hand, like an A0 stepping of a device, and you can't point at it like a foundry with the most advanced manufacturing processes created from $15 billion to $20 billion worth of concrete, steel, and wafer etching equipment and a whole lotta people in bunny suits.

No, the most important thing that the semiconductor industry delivers and has consistently delivered for over five decades is optimism. And unlike a lot of chips these days, there is no shortage of it despite the serious challenges that the industry is facing.

By optimism we do not mean the kind of future poisoning that company founders and chief executives sometimes succumb to when they spend too much time in the future that is not yet here without seeing the consequences of the technologies they are in the process of inventing. And we certainly do not mean the zeal that others exhibit when they think that information technology can solve all of our problems. It can't, and it often makes some things worse as it is making other things better, as all technologies have done since humanity first picked up a stick. It is the arm that swings the stick both ways, to plant a seed or to crush a skull. So it is with the Internet, social media, artificial intelligence, and so on.

The optimism that we are speaking of in the semiconductor industry is usually stripped bare of such consequences, with the benefits all emphasized and the drawbacks mostly ignored, except possibly when considering climate change and how compute, storage, and networking are an increasingly large part of our lives, something that represents an ever-enlarging portion of business and personal budgets and consequently an embiggening part of the energy consumption on the planet. Semiconductor makers turn this drawback (more computers requiring more power and cooling) into a cause for driving innovation as hard as it can be done.

The irony is that we will need some of the most power-hungry systems the world has ever seen to simulate the conditions that will prove how climate change will affect us collectively and (here is the important bit) individually. How will you feel when you can drill down into a simulation, for a modest fee of course, and see a digital twin of your home being destroyed by a predicted hurricane two years from now? Or an earthquake, or a fire, or a tsunami? What is true of the Earth simulation will be as true for your body simulation and your consequent healthcare.

If the metaverse means anything, it means using HPC and AI to make general concepts extremely personal. We don't know that the world was hell-bent to adopt the 24-hour news cycle and extreme entertainment optionality of cable television, or the Web, or social networks, but what we do know is that most of us ended up on these platforms anyway. And what seems clear is that immersive, simulated experiences are going to be normalized, are going to be a tool in all aspects of our lives, and that the race is on to develop the technologies that will get us there.

It would be hard to find someone more genuine and more optimistic about the future of the semiconductor industry than Aart de Geus, co-founder, chief executive officer, and chairman of electronic design automation tool maker Synopsys, who gave the opening keynote at the ISSCC 2022 chip conference, which was hosted online this week. We read the paper that de Geus presented and watched the keynote as well, and will do our best to summarize this tour de force in semiconductor history and prognostication as we enter what de Geus called the SysMoore Era: the confluence of Moore's Law ambitions in transistor design (and now packaging) with systemic complexity, which together will bring about a 1,000X increase in compute across devices and systems of all kinds and lead to a "smart everything" world.

Here is de Geus showing the now-familiar exponential plot of the transistor density of CPUs, starting with the Intel 4004 in 1971 and running all the way out five decades later to the Intel Ponte Vecchio GPU complex, with 47 chiplets lashing together 100 billion transistors, and the Cerebras WSE-2 wafer-scale processor, with 2.5 trillion transistors.

That's the very familiar part of the SysMoore Era, of course. The Sys part needs a little explaining, but it is something that we have all been wrestling with in our next platforms. Moore's Law improvements of 2X transistor density are taking bigger leaps to stay on track and are not yielding a 2X lowering in the cost of the transistors. This latter bit is what actually drives the semiconductor industry (aside from optimism), and we are now entering a time when the cost of transistors could rise a little with each generation, which is why we are resorting to chiplets and advanced packaging, gluing them together side-by-side with 2.5D interposers, stacking them up in 3D fashion with vias, or, in many cases, a mix of the two approaches. Chiplets are smaller and have higher yield, but there is complexity and cost in the 2.5D and 3D packaging. The consensus, excepting Cerebras, is that this chiplet approach will yield the best "tech-onomic" results, to use a term from de Geus.
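To see why smaller chiplets yield better, consider the textbook Poisson defect-yield model, in which the odds of a die being defect-free fall off exponentially with its area. This back-of-the-envelope sketch in Python is ours, not from the keynote, and the defect density is an assumed, purely illustrative number:

    import math

    def poisson_yield(die_area_cm2, defect_density_per_cm2):
        # Classic Poisson yield model: Y = exp(-A * D0)
        return math.exp(-die_area_cm2 * defect_density_per_cm2)

    D0 = 0.1  # assumed defect density (defects per cm^2), purely illustrative

    # One 8 cm^2 monolithic die vs. four 2 cm^2 chiplets covering the same area
    print(f"monolithic yield:  {poisson_yield(8.0, D0):.1%}")  # ~44.9%
    print(f"per-chiplet yield: {poisson_yield(2.0, D0):.1%}")  # ~81.9%
    # A failed chiplet scraps 2 cm^2 of silicon; a failed monolithic die scraps
    # 8 cm^2. That gap is what pays for the 2.5D/3D packaging described above.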

With SysMoore, we are moving from system-on-chip designs to system-of-chips designs, illustrated below, to bend up the semiconductor innovation curve that has been dominated by Moore's Law for so long (with some help from Dennard scaling until 2000 or so, of course). Like this:

The one thing that is not on the charts that de Geus showed in the keynote, and that we want to inject as an idea, is that compute engines and other kinds of ASICs are definitely going to get more expensive, even if the cost of packaging up chiplets or building wafer-scale systems does not consume all of the benefits from the higher yield that comes from using gangs of smaller chips or adding lots of redundancy into a circuit and never cutting it up.

By necessity, as the industry co-designs hardware and software together to wring the most performance per dollar per watt out of a system, we will move away from the volume economics of mass manufacturing. Up until now, a compute engine or network ASIC might ship in hundreds of thousands to millions of units, driving up yields over time and driving down manufacturing cost per unit. But in the SysMoore Era, volumes for any given semiconductor complex will go down because they are not general purpose, like the X86 processor has been for servers and PCs or the Arm system on chip has been for smartphones and tablets for the past decade and a half. If volumes per type of device go down by an order of magnitude, and the industry needs to make more types of devices, this will put upward pressure on unit costs, too.
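The upward pressure is easy to see with toy numbers: the fixed cost of designing and masking a chip amortizes over every unit sold, so cutting volume by 10X adds roughly 10X the amortized design cost to each part. The figures below are our own illustrative assumptions, not numbers from the keynote:

    # Unit cost = amortized fixed design cost (NRE) + marginal manufacturing cost
    nre = 500e6        # assumed design + mask NRE for an advanced node, dollars
    marginal = 80.0    # assumed manufacturing cost per good unit, dollars
    for volume in (5_000_000, 500_000, 50_000):
        print(f"{volume:>9,} units -> ${nre / volume + marginal:,.0f} per unit")
    #  5,000,000 units -> $180 per unit
    #    500,000 units -> $1,080 per unit
    #     50,000 units -> $10,080 per unit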

So what is the answer to these perplexing dilemmas that the semiconductor industry is facing? Artificial intelligence augmenting human expertise in designing these future system-of-chips complexes, of course. And it is interesting that the pattern that evolved to create machine learning for data analytics is being repeated in chip design.

"EDA is relatively simple conceptually," explains de Geus. "If you can capture data, you may be able to model it. If you can model it, maybe you can simulate. If you can simulate, maybe you can analyze. If you can analyze, maybe you can optimize. And if you can optimize, maybe you can automate. Actually, let's not forget the best automation is IP reuse; it is the fastest, most efficient kind. Now it's interesting to observe this because if you look at the bottom layers, what we have been doing in our field really for 50 years is we have built digital twins of the thing that we are still building. And if we now say we're going to deliver to our customers and the world that 1,000X more capability in chips, the notion of the Metaverse (some call it Omniverse, Neoverse, whatever you want to call it) is becoming extremely powerful because it is a digital view of the world as a simulation of it."

The complexity that comprises a modern chip complex, full of chiplets and packaging, is mind-numbing, and the pressure to create the most efficient implementation, across its many possible variations, is what is driving the next level of AI-assisted automation. We are moving from computer-aided design, where a workstation helped a chip designer, to electronic design automation, where the synthesis of logic and the placing and routing of that logic and its memories and interconnects is done by tools such as those supplied by Synopsys, to what we would call AIDA, short for Artificial Intelligence Design Automation, which makes us think of Ada Lovelace, of course, the programmer on the Analytical Engine from Charles Babbage.

This chart captures the scale of complexity in an interesting way, since the bottom two have been automated by computers: IBM's Deep Blue using brute-force algorithms to play chess and Google's AlphaGo using AI reinforcement learning to play Go.

Google has been using lessons learned from AlphaGo to do placement and routing of logic blocks on chips, as we reported two years ago from ISSCC 2020, and Synopsys is embedding AI in all parts of its tool stack in something it is calling Design Space Optimization, or DSO. A chess match has a large number of possible moves, and Go has orders of magnitude more, but both are win-lose games. Not so for the placement and routing of logic blocks or the possible ways to glue compute complexes together from myriad parts. These are not zero-sum problems, just ones with better or worse options, like going to the eye doctor and sitting behind that annoying machine with all the blasted lenses.
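To make the "better or worse, never won or lost" point concrete, here is a minimal placement toy: simulated annealing shuffling cells on a grid to reduce total wirelength. This is a generic textbook heuristic of our own choosing, nothing like Google's learned placer or the Synopsys flow, and every number in it is an arbitrary assumption:

    import math, random

    # Toy placement: put n cells on a grid so that total wirelength shrinks.
    # Every placement is legal; some are just better than others.
    random.seed(0)
    n, grid = 20, 10
    nets = [(random.randrange(n), random.randrange(n)) for _ in range(40)]
    pos = {c: (random.randrange(grid), random.randrange(grid)) for c in range(n)}

    def wirelength(p):
        # For 2-pin nets, half-perimeter wirelength is the Manhattan distance
        return sum(abs(p[a][0] - p[b][0]) + abs(p[a][1] - p[b][1]) for a, b in nets)

    cost, temp = wirelength(pos), 5.0
    for _ in range(20000):
        c = random.randrange(n)
        old = pos[c]
        pos[c] = (random.randrange(grid), random.randrange(grid))
        new = wirelength(pos)
        # Always accept improvements; accept regressions with a chance that
        # shrinks as the "temperature" cools, so the search can escape ruts.
        if new < cost or random.random() < math.exp((cost - new) / temp):
            cost = new
        else:
            pos[c] = old
        temp *= 0.9997

    print("final wirelength:", cost)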

The possible combinations of logic elements and interconnects form a very large design space, and it will itself require an immense amount of computation to add AI to the design stack. That amount has been increasing on a log scale since the first CAD tools became widely used:

But the good news is that the productivity gains from chip design tools have been growing at a log scale, too. Which means what you can do with one person and one workstation designing a chip is amazing here in the 2020s, and will very likely be downright astonishing in the 2030s, if the vision of de Geus and his competitors comes to pass.

In the chart above, the Fusion block is significant, says de Geus, and it is implemented in something called the Fusion Compiler in the Synopsys toolchain; this is the foundation for the next step, which is DSO. Fusion plugs all of these different tools together to share data as designers optimize a chip for power, performance, and area, or PPA in the lingo. These different tools work together, but they also fight, and they can be made to provide better results than using the tools in a serial manner, as this shows:

The data shown above is an average of more than 1,000 chip designs, spanning from 40 nanometers down to 3 nanometers. With DSO, machine learning is embedded in all of the individual elements of the Fusion Compiler, and output from simulations is used to drive machine learning training that in turn is used to drive designs. The way we conceive of this (and de Geus did not say this) is that the more the Synopsys tools design chips and examine options in the design space, the faster they will learn what works and what does not, and the better they will be at showing human chip designers how to push their designs.
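Our mental model of that feedback loop looks something like surrogate-assisted search: every expensive tool run becomes training data for a cheap model of the tool's behavior, and the model then nominates the next settings worth simulating. The sketch below is that generic pattern in miniature, with a made-up stand-in objective; it is our reading of the DSO idea, not Synopsys code:

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_ppa(x):
        # Stand-in for a slow EDA run: maps three tool settings to a power
        # figure with a little noise. The real objective is proprietary; this
        # toy just has a minimum for the loop to find (at x = 0.3).
        return float(np.sum((x - 0.3) ** 2) + 0.01 * rng.standard_normal())

    X = [rng.uniform(0, 1, size=3) for _ in range(8)]   # seed "tool runs"
    y = [simulate_ppa(x) for x in X]

    for _ in range(30):
        # Fit a cheap quadratic surrogate power = f(settings) to all past runs
        A = np.hstack([np.square(X), X, np.ones((len(X), 1))])
        coef, *_ = np.linalg.lstsq(A, np.array(y), rcond=None)
        # Screen many candidates on the surrogate; simulate only the best one
        cand = rng.uniform(0, 1, size=(256, 3))
        scores = np.hstack([np.square(cand), cand, np.ones((256, 1))]) @ coef
        best = cand[np.argmin(scores)]
        X.append(best)
        y.append(simulate_ppa(best))

    print("best settings:", X[int(np.argmin(y))], "-> power:", min(y))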

Let's show some examples of how the early stages of DSO work with the Synopsys tools, beginning with a real microcontroller from a real customer:

De Geus highlighted the important parts of the design, with a baseline of the prior design and the target of the new design. A team of people was set loose on the problem using the Synopsys tools, and you can see that they beat the customer target on both power and timing by a little bit. Call it a day. But then Synopsys fired up the Fusion Compiler and its DSO AI extensions. Just using the DSO extensions to Fusion pushed the power draw down a lot and to the left a little, and then once AI-trained algorithms were kicked on, the power was pushed down even further. You can see the banana curve for the DSO and DSO AI simulations, which allows designers to trade off power and timing on the chip along those curves.
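That banana curve is, in optimization terms, a Pareto front: the set of runs where you cannot improve power without giving up timing, or vice versa. Here is a minimal sketch of extracting such a front from a pile of hypothetical (power, timing) results, with entirely made-up numbers:

    def pareto_front(points):
        # Keep designs not dominated on both axes (lower power and lower
        # timing violation are both better). Survivors trace the banana curve.
        front = []
        for power, timing in sorted(points):
            if not front or timing < front[-1][1]:
                front.append((power, timing))
        return front

    # Hypothetical (power in mW, worst slack violation in ns) tool runs
    runs = [(102, 0.42), (98, 0.55), (95, 0.61), (110, 0.30),
            (97, 0.50), (120, 0.21), (105, 0.33), (99, 0.47)]
    print(pareto_front(runs))
    # Every surviving point is a defensible design; which one ships depends on
    # whether the team needs the power or the timing more.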

Here is another design run that was done for an actual CPU as it was being designed a year ago:

A team of experts took months to balance out the power leakage versus the timing in the CPU design. The DSO extensions to the Fusion Compiler pushed it way over to the left and down a little, and when the AI trained models of the tool were switched on, a new set of power leakage and timing options were shown to be possible. A single engineer did the DSO design compared to a team using the Synopsys tools, and that single engineer was able to get a design that burned from 9 percent to 13 percent less power and had 30 percent less power leakage with anywhere from 2X to 5X faster time to design completion.

There were many more examples in the keynote of such advances after an injection of AI into the tools. But here is the thing, and de Geus emphasized this a number of times: the cumulative nature of these advances is not additive, but multiplicative. They will amplify much more than the percentages of improvement on many different design vectors might imply. But it is more than that, according to de Geus.

"The hand that develops the computer on which EDA is written can help develop the next computer to write better EDA, and so on," de Geus explained at the end of his talk. "That circle has brought about exponential achievements. So often we say that success is the sum of our efforts. No, it's not. It is the product of our efforts. A single zero, and we all sink. Great collaboration, and we all soar."


The World’s Shortest List Of Technologies To Watch In 2022 – Forbes

The promise of new technologies bombards us. As a manager, investor, entrepreneur, or innovator, which of these technologies should you monitor closely in 2022? Is it AI and its promise of penetrating more businesses and practices, or should you focus on Web 3.0 and its disruption of Web 2.0?

In this post, I present my thinking and how I came up with the shortest list in the world for technologies to watch in 2022.

Let's take a closer look at which technologies should interest us in 2022

While working on my list of technologies to watch, I stumbled across a research paper by Lei Mi from the Chinese Academy of Sciences in Shaanxi, which I found inspirational.

In his paper, Mi introduces the term "Key Core Technologies" for the technologies that are the "cornerstone to boosting economic and social progress." Mi explains that as technologies continuously evolve, all new technologies are combined and integrated from earlier ones. As technologies upgrade and deliver greater value, we see a dramatic increase in productivity that facilitates rapid social and economic advances, all enabled by a shortlist of "key core technologies."

Mi explains that Key Core Technologies are the "cornerstone of the technical system" and are based on "scientific discovery and technological invention." Most importantly, Key Core Technologies have three attributes: they are Revolutionary, Essential, and Leading.

In many cases, the Key Core Technologies, with their powerful attributes, kick off new "Technology Waves." For example, we went through the "Connectivity" wave not too many years ago. Web and Mobile technologies were the "Key Core Technologies" of the day, and companies like Apple and Google were the day's heroes.

With more web pages to browse and mobile devices to carry around, we discovered the value of data and leaped from the "connectivity" wave to the "data" wave. That's when big data captured our imagination, and companies like Facebook and Netflix served as excellent examples of how big data can benefit client experiences, through personalization, for example.

With enough data in our possession, we arrived at the "wave of intelligence," where we are today. AI and Machine Learning come to our aid to make sense of our data and turn data into insights and insights into actions.

What lies ahead is open to interpretation. Some argue the next big tech wave will not be digital at all; instead, it will be the age of disruptive technologies such as Synthetic Biology and Nanotech. Others claim we have yet to scratch the surface of AI, and that technology enablers such as Augmented and Virtual Reality will take us to the Metaverse, while at the same time Blockchain and Decentralization will open up the way to Web 3.0.

This is what the full-stack engineer of the future may look like

The exciting thing about the different technology waves, and the Key Core Technologies enabling each wave, is the "vocabulary" of these technologies.

If you think, for example, of the initial wave described above, the wave of connectivity, it had its own unique and new vocabulary: cloud computing, touch screens, apps, 3G, or GPS, to name a few terms of the day. The companies that were first to understand this vocabulary, and how to apply it to product and venture building, are the biggest in the world.

Companies that did not understand the "vocabulary" were pushed back or diminished altogether.

The lesson to learn is that understanding what the current tech wave is, which Key Core Technologies to monitor, and the vocabulary of these waves and technologies is existential to business success. If, for example, there is good reason to believe NFTs will disrupt the gaming industry, and your company plays in this domain and does not understand the unique vocabulary of NFTs, expect trouble along the way, even if it's just because your competitors can speak fluent "NFTish" and will grab the business opportunities they unlock. Sometimes it's as simple as the Nokia CEO's message from his farewell speech: "We didn't do anything wrong, but somehow, we lost." That "somehow" could very well be illiteracy in the day's technology.

So which Key Core Technologies should we pay attention to in 2022? The list is surprisingly short, which speaks volumes about these technologies' Revolutionary, Essential, and Leading powers.

First on the list is AI. It may sound like Artificial Intelligence has been around for a long time, but actually, we are just starting to benefit from this technology's capabilities. The big AlphaGo neural network breakthrough is just as old as my laptop: six years old. AI is revolutionary in that it is the first time we can train machines to learn independently. Admittedly, for the first few years of deploying AI, 90% of what we did was look for patterns in data. We could do beautiful things by looking for patterns in data, such as personalized experiences or utilizing assets in more beneficial ways.

But now, there are new advancements. For example, with Generative Adversarial Networks (where two machines compete with each other to become more accurate in their predictions) or Transformers (a deep learning model that adopts the mechanism of self-attention), we can leap forward to new functionalities.

Here's an example: you can find a trace of these technologies in your Gmail, where, when you type a sentence, AI can complete it for you. Or, if we push these tech capabilities to the extreme, an autonomous robot in a warehouse can train itself on how to move about safely, which is impressive because machines are starting to teach themselves how to complete very complicated tasks.
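For the curious, the self-attention mechanism that Transformers are built on fits in a few lines: every token scores its relevance to every other token, and the scores become mixing weights. Here is a minimal single-head NumPy sketch with random toy weights (an illustration of the mechanism only, nothing to do with Gmail's actual models):

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        # Single-head scaled dot-product attention: each token scores every
        # other token, and the softmaxed scores become mixing weights.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        return weights @ V

    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 16))                     # 5 tokens, 16-dim each
    Wq, Wk, Wv = (rng.standard_normal((16, 16)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)           # (5, 16)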

Amazon Go is still a great example of how IoT and AI come together to provide a better client experience

The second Key Core Technology to consider is IoT, and the unique thing about IoT is that it enables us to connect the Physical and Cyber worlds. So what does it mean to join the cyber and physical worlds? Here's an example: Think of the Amazon website. Every item a visitor to the Amazon website clicks links to a unique page; every decision triggers algorithms to help personalize the experience. But when you think about the physical world in comparison, it is mostly not intelligent and not connected, so you and I can double-tap a bottle of wine at the shop all day long, and it will not provide us with any helpful information. Now think about the Amazon Go store, the shop where you can pick up a product from any shelf and walk out, as there is no queue to pay.

It's a great example of how IoT digitizes the physical world, so that products, environments, and spaces can become intelligent and automated. This combination of AI and IoT (also known as AIoT) provides us with a new perspective on how people move and interact in physical spaces. In addition, it can seamlessly connect with existing cameras and generate insights using out-of-the-box AI-powered skills. For example, AIoT can transform and disrupt commerce and enable fraud detection in the real world. Furthermore, it can help improve employee safety. Finally, we must understand the vocabulary of AIoT as it will continue to evolve and impact businesses in the coming years.

Which technologies did not make my Key Core Technologies watch list, but I'll watch anyway? A few have the potential to evolve into KCTs in the future and already have the potential to disrupt industries. If you are looking for near-term returns, the biggest challenge with these technologies is how long they will take to mature and scale. There is enough evidence that once they hit a sufficient maturity and scalability level, they will disrupt many industries; however, it is unclear how long this maturation phase will take.

I'm a bit split on whether AR and VR fall under the definition of "Key Core Technology." For example, one could argue that AR and VR devices combine commoditized software tools with cutting-edge IoT and AI capabilities. In that sense, AR and VR are tail technologies of AIoT. Even if we go beyond the devices, up the tech stack, and climb towards the platform and application layers that provide the complete package of an AR/VR solution, those layers use existing technologies.

However, we need to watch these technologies because of their disruptive potential. I'm not convinced either AR or VR will become "Leading" technologies as Lei Mi defines them. Still, they will create drastically different user experiences and new business models. They deserve to be on the "watch us" list of technologies on these two fronts alone.

So why should we be excited about AR and VR? If we use AI to understand the context of what is happening and use IoT to connect the physical and digital worlds, we can use AR and VR to mesh these worlds together.

We hear more and more about the Metaverse and how we will experience it using AR and VR. AR and VR, or XR, or yet another name used to describe these capabilities, "spatial computing": all point in the same direction of new ways to interface with data and collaborate in additive (augmented reality) and immersive (virtual reality) ways.

As these technologies mature, we should expect some exciting things to happen.

VR did not make the list but is a strong contender

Here the plot thickens even further. First off, some would argue Blockchain is not a "technology" in the way that AI is a technology. Regardless, Blockchain could become Revolutionary, even if what we see to date is more of an evolution of Blockchain and not a revolution.

It's been over a decade since the elusive Satoshi Nakamoto published his (or her?) white paper establishing a model for Blockchain. Ten years have passed since Laszlo Hanyecz traded his Bitcoins to get two pizzas from a local pizza store, which may very well be the first-ever Bitcoin trade.

We must ask ourselves a difficult question: over this decade, did Blockchain disrupt any industry? The short answer is it did not. It did disrupt many people's bank accounts, as the volume of trading in crypto and now NFTs is constantly growing. To this end, it is hard to claim Blockchain is "Essential" in the way IoT or AI have become, and it is not "Leading" either. But oh, the potential. Many talented people, backed by smart and not-so-smart money, are focused on making Blockchain a reality in contracts, finance, healthcare, and yes, possibly even rebuilding the world wide web into a new Web 3.0 configuration. So no, Blockchain is not a Key Core Technology yet, but if things go as planned (or wished), it will graduate to this level. So let's continue to monitor this "technology" closely.

True, many of us worldwide don't have stable 5G connectivity, but the midnight oil is already burning in faraway labs where 6G is under development. As expected, it will be tremendously faster than 5G; some numbers claim 100X faster. And the reason this matters is that, for the IoT, AI, Blockchain, and AR/VR world we are envisioning, a vehicle to transport data at faster speeds is a must. On the flip side, without the advancements of 5G and 6G, some of the experiences and business models we hear about will never mature. 5G and 6G are more of a critical enabler than a Key Core Technology. If these faster modes of communication cover the world, or if an alternative arises, both will help disrupt multiple markets and industries in collaboration with IoT, AI, AR/VR, and Blockchain.

Now, you might be wondering: why are there only two Key Core Technologies?

It just shows the Leading strengths of IoT and AI. These core technologies are so powerful that if we combine IoT and AI, we get digital twins, autonomous vehicles, Amazon Go-style shops, and an endless number of other Intelligent and Automated applications.

But there is one other thing that is super interesting about these technologies: they define new business opportunities.

When these technologies (including their long tail of sub-technologies) combine, different opportunity areas emerge with unique and fundamentally better ways to solve business problems, ways that were not possible before the emergence of these technologies.

For example, when we combine the power of AI and IoT, we achieve intelligent automation: we can shift responsibilities from humans to machines. So we can move from human-led operations to bionic operations. In bionic environments, AI augments our capabilities; it's like having a little secret helper that whispers in your ear and helps you complete tasks in a better way.

If you used Waze to drive to work this morning, that's one example of how AI augments our ability to get from one place to another more efficiently. At its extreme, Intelligent Automation leads to fully autonomous operations, where humans are entirely out of the loop.

An excellent example of a company that pushed Intelligent Automation to the extreme is Ocado, an online supermarket in the UK with one of the most advanced automated robotic warehouses. This warehouse was not designed for people or to streamline the operation between people and machines. It was designed with a "robots first" approach, which allowed Ocado to build an environment where robots can operate and collaborate autonomously, fulfilling online shopping orders at extreme speed and efficiency.

Another business opportunity lives at the intersection of Blockchain and AI, which enables Decentralization and Trust. Once we "decentralize," we can bring trust, security, and traceability into many use cases. An example of Decentralization bringing "trust" to the food we eat and improving supply chains' impact on climate and sustainability is OpenSC, a .org I had the pleasure of supporting during its incubation.

OpenSC uses AI and Machine Learning, IoT sensors, and Blockchain to verify claims about sustainable food production, trace products across supply chains, and share this information with businesses and consumers. Nestle recently joined OpenSC "to provide consumers with the ability to trace their products right back to their origins." There are other aspects of Decentralization that can benefit businesses beyond the hyped Crypto and NFT opportunities that grab the headlines; companies like OpenSC will bring the benefits of Blockchain to more industries.

Another is the opportunity for Key Core Technologies (and soon-to-be KCTs) to converge and enable Spatial Creation & Collaboration. These are the more futuristic worlds of AR, VR, the Metaverse, and Web 3.0. As I discussed in my post on Metaverse opportunities, there is still some work to do before these technologies mature and scale. But we are already seeing initial Proofs of Concept and Minimal Viable Products that hint at their hidden potential. Here's an interesting example:

At the last CES in Las Vegas, Hyundai, which collaborates with Boston Dynamics (that's the company that makes Spot the robot dog and is behind all of the parkour-robot videos you can see on YouTube), announced a Metaverse collaboration.

This collaboration will enable users to enter a virtual world representing a different location on Earth or another planet. At the same time, a robot will be present at the physical site that the virtual world is simulating. By bringing the virtual and physical worlds together, the human operator in the virtual world will manipulate objects or machines at the remote physical site. The on-site robot will repeat every movement the human operator makes. Genuinely fantastic stuff.

Two other business opportunities are worth mentioning as well.

So we have the core technologies and the broad opportunity areas that they create. Still, to deliver value with these technologies, there is one more thing we need to do, and that is to align Desirability, Feasibility, and Viability. Let's unpack what I mean:

We need to align the stars between the Feasibility and maturity of these technologies, the Desirability of use cases, and the Viability of business models.

Going through such an exercise can help identify technological maturity, our client needs, and business models. Such an exercise also helps identify opportunities in the near or far term.

But more importantly, it helps us understand which technologies hold the most significant potential for us, so we can keep them on our "important technologies to watch" list.

As promised, this is a short list of technologies to watch, but hopefully one that will deliver value.


Opinion: Alpha Phi Alpha develops leaders and promotes brotherhood – The San Diego Union-Tribune

Mitchell was initiated into Alpha Phi Alpha Fraternity Inc. by way of Beta Chapter on the campus of Howard University in 1995, and is president of the local alumni San Diego chapter, Zeta Sigma Lambda Chapter. He lives in La Jolla.

Alpha Phi Alpha Fraternity Inc. was founded on Dec. 4, 1906, on the campus of Cornell University by seven men. Henry Arthur Callis, Charles Henry Chapman, Eugene Kinckle Jones, George Biddle Kelley, Nathaniel Allison Murray, Robert Harold Ogle and Vertner Woodson Tandy dared to be pioneers in an uncharted field of student life.

We provide this platform for community commentary free of charge. Thank you to all the Union-Tribune subscribers whose support makes our journalism possible. If you are not a subscriber, please consider becoming one today.

Alpha Phi Alpha members at the 2020 MLK Parade. The green shirts were handed out to volunteers.

(Courtesy photo)

Our founders, known as the Jewels, went on to become a medical doctor, an educator, an executive secretary of the National Urban League, a civil engineer, an instructor, a secretary attached to the U.S. Senate Appropriations Committee and an architect. These men recognized the need for a strong bond of brotherhood among African Americans. Their success in establishing a fraternity created the framework for the creation of other African American Greek letter organizations.

Alpha Phi Alpha Fraternity Inc. develops leaders, promoting brotherhood and academic excellence while providing service and advocacy for our communities. Part of our legacy is our membership. One of our most famous members is Dr. Martin Luther King Jr., who was initiated in 1952 as a graduate student at Boston University working on his doctorate in systematic theology with an interest in philosophy and ethics.

As we celebrate his birthday each January, we are reminded that Dr. King's journey was not taken alone: his leadership abilities were developed by members of our fraternity, and brotherhood was promoted as he took the first of many steps on his journey to provide service and advocacy to our American community.

Locally, through collaboration with Zeta Sigma Lambda Foundation, we celebrate Martin Luther King Jr. in San Diego with an annual parade on Harbor Drive. Each parade has a theme, and is full of dazzling floats, bands, drill teams, colleges, fraternities, sororities, churches and community organizations.

During this parade, we also honor our military and our police and fire departments. The parade has evolved over time and now represents the diverse community here in San Diego. It is something that lends itself to how we celebrate him nationally with his own holiday and memorial.

The MLK memorial in Washington, D.C., was established by Alpha Phi Alpha Fraternity Inc. in 1996. Forty years earlier, Dr. King had been honored by Alpha Phi Alpha Fraternity Inc. with the Alpha Award of Honor for Christian leadership in the cause of first-class citizenship for all mankind.

It is in this spirit that we, the local chapter of Alpha Phi Alpha Fraternity Inc., decided to contribute to protecting our community by postponing the parade until 2023. We are thankful for the continued collaboration with San Diego County and look forward to returning in 2023 to the same route celebrating the diversity of our community.

Our fraternity works to provide service to our community through national and local programs. Our national programs are Project Alpha, Go-To-High-School, Go-To-College and A Voteless People Is A Hopeless People. Our local programs, along with the MLK parade, are the San Diego Multicultural Festival and our Holiday Scholarship Ball. The Multicultural Festival celebrates the diversity reflected throughout San Diego. This event is planned for April 24. This past December, we celebrated the Holiday Scholarship Ball, which assists with our scholarship fundraising.

Project Alpha was developed collaboratively with the March of Dimes to educate African Americans on the consequences of teenage pregnancy from the male perspective. This program assists young men in developing an understanding of their role in preventing untimely pregnancies and sexually transmitted infections through responsible attitudes and behaviors.

Our Go-To-High-School, Go-To-College programs focus on the importance of completing secondary and collegiate education as a road to advancement. We believe school completion is the single best predictor of future economic success. Locally, we partner with middle and high schools to mentor African American young men towards this goal. Through our fundraising efforts, we provide scholarships for college-bound students.

Our A Voteless People Is A Hopeless People program focuses on political awareness, empowerment and the importance of voting. Our local chapter's program will consist of a no-touch voter registration drive on Saturday from 9 a.m. to 12 p.m. at 312 Euclid Avenue in San Diego. We invite members of the community to come out and safely confirm their voting registration in preparation for the coming elections. As we navigate our new normal, Alpha Phi Alpha Fraternity Inc. will reaffirm our commitment to social justice, community advocacy, economic development and mobility, education and health-care equity.


Altos bursts out of stealth with $3B, a dream team C-suite and a wildly ambitious plan to reverse disease – FierceBiotech

Altos Labs just redefined big in biotech. Where to start? The $3 billion in investor support? The C-suite staffed by storied leaders (Barron, Bishop, Klausner) identifiable by one name? Or the wildly ambitious plan to reverse disease for patients of any age? Altos is all that and more.

Early details of Altos leaked out last year when MIT Technology Review reported Jeff Bezos had invested to support development of technology that could revitalize entire animal bodies, ultimately prolonging human life. The official reveal fleshes out the vision and grounds the technology in the context of the nearer-term opportunities it presents to improve human health.

"It's clear from work by Shinya Yamanaka, and many others since his initial discoveries, that cells have the ability to rejuvenate, resetting their epigenetic clocks and erasing damage from a myriad of stressors. These insights, combined with major advances in a number of transformative technologies, inspired Altos to reimagine medical treatments where reversing disease for patients of any age is possible," Hal Barron, M.D., said in a statement.

Barron is set to take up the CEO post when he leaves GlaxoSmithKline in August, completing a C-suite staffed by some of the biggest names in life sciences. The former Genentech executive will join Rick Klausner, M.D., and Hans Bishop at the top of Altos. Klausner, co-founder of companies including Juno Therapeutics and Grail, is taking up the chief scientific officer post. Bishop, who used to run Juno and Grail, is Altos president. The leadership team is rounded out by Chief Operating Officer Ann Lee-Karlon, Ph.D., formerly of Genentech.

RELATED: Barron quits GSK to take CEO post at $3B biotech startup

The team will use $3 billion in capital committed by investors including Arch Venture Partners to try to turn breakthroughs in our understanding of cellular rejuvenation into transformational medicines. That effort will build on the work of a galaxy of academic scientists Altos has brought under its umbrella.

Aiming to integrate the best features of academia and industry, the startup is setting up Altos Institutes of Science in San Francisco, San Diego and Cambridge, U.K. Juan Carlos Izpisua Belmonte, Ph.D., Wolf Reik, M.D., and Peter Walter, Ph.D., will lead the three institutes, overseeing the work of a current roster of almost 20 principal investigators across the sites. The scientific leadership team also features Thore Graepel, Ph.D., co-inventor of AI breakthrough AlphaGo, and Shinya Yamanaka, M.D., Ph.D., a Nobel laureate who gives Altos ties to Japan.

Klausner, who founded Altos with Bishop, and his colleagues brought the scientists together and created a board of directors that features luminaries such as CRISPR pioneer Jennifer Doudna, Ph.D., and fellow Nobel laureates Frances Arnold, Ph.D., and David Baltimore, Ph.D., to help bring cellular rejuvenation out of academic labs and into clinical development.

"Altos seeks to decipher the pathways of cellular rejuvenation programming to create a completely new approach to medicine, one based on the emerging concepts of cellular health," Klausner said. "Remarkable work over the last few years beginning to quantify cellular health and the mechanisms behind that, coupled with the ability to effectively and safely reprogram cells and tissues via rejuvenation pathways, opens this new vista into the medicine of the future."


DeepMind’s David Silver on games, beauty, and AI’s potential to avert human-made disasters – Bulletin of the Atomic Scientists

DeepMind's David Silver speaks to the Bulletin of the Atomic Scientists about games, beauty, and AI's potential to avert human-made disasters. Photo provided by David Silver and used with permission.

David Silver thinks games are the key to creativity. After competing in national Scrabble competitions as a kid, he went on to study at Cambridge and co-found a video game company. Later, after earning his PhD in artificial intelligence, he led the DeepMind team that developed AlphaGo, the first program to beat a world champion at the ancient Chinese game of go. But he isn't driven by competitiveness.

That's because for Silver, now a principal research scientist at DeepMind and computer science professor at University College London, games are playgrounds in which to understand how minds, human and artificial, learn on their own to achieve goals.

Silver's programs use deep neural networks, machine learning algorithms inspired by the brain's structure and function, to achieve results that resemble human intuition and creativity. First, he provided the program with information about what humans would do in various positions for it to imitate, a learning style known as supervised learning. Eventually, he let the program learn by playing against itself, known as reinforcement learning.

Then, during a pivotal match between AlphaGo and the world champion, he had an epiphany: Perhaps the machine should have no human influence at all. That idea became AlphaGo Zero, the successor to AlphaGo that received zero human knowledge about how to play well. Instead, AlphaGo Zero relies only on the games rules and reinforcement learning. It beat AlphaGo 100 games to zero.
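The distinction matters: supervised learning imitates human examples, while self-play reinforcement learning needs only the rules and a win/lose signal. As a tiny illustration of the latter (the spirit of AlphaGo Zero at a microscopic scale, not DeepMind's actual method), here is tabular self-play learning for the game of Nim, with all hyperparameters chosen arbitrarily:

    import random
    from collections import defaultdict

    # Tabular self-play for Nim: take 1-3 stones, whoever takes the last stone
    # wins. No human examples, only the rules and a win/lose signal.
    random.seed(0)
    Q = defaultdict(float)      # Q[(stones, take)] -> value for the mover
    ALPHA, EPS, STONES = 0.5, 0.2, 10

    def moves(stones):
        return [t for t in (1, 2, 3) if t <= stones]

    def pick(stones):
        if random.random() < EPS:                        # explore sometimes
            return random.choice(moves(stones))
        return max(moves(stones), key=lambda t: Q[(stones, t)])

    for _ in range(20000):                               # self-play episodes
        stones, history = STONES, []
        while stones:
            take = pick(stones)
            history.append((stones, take))
            stones -= take
        reward = 1.0    # the player who took the last stone won
        for state_action in reversed(history):           # credit alternates
            Q[state_action] += ALPHA * (reward - Q[state_action])
            reward = -reward

    print([max(moves(s), key=lambda t: Q[(s, t)]) for s in range(1, STONES + 1)])
    # Optimal play takes (stones % 4) when that is nonzero; the learned greedy
    # policy should match it in most winnable states.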

I first met Silver at the Heidelberg Laureate Forum, an invitation-only gathering of the most exceptional mathematicians and computer scientists of their generations. In Heidelberg, he was recognized for having received the Association for Computing Machinery's prestigious Prize in Computing for breakthrough advances in computer game-playing.

"Few other researchers have generated as much excitement in the AI field as David Silver," Association for Computing Machinery President Cherri M. Pancake said at the time. "His insights into deep reinforcement learning are already being applied in areas such as improving the efficiency of the UK's power grid, reducing power consumption at Google's data centers, and planning the trajectories of space probes for the European Space Agency." Silver is also an elected Fellow of the Royal Society and was the first recipient of the Mensa Foundation Prize for the best scientific discovery in the field of artificial intelligence.

Silver's stardom contrasts with his quiet, unassuming nature. In this condensed, edited, from-the-heart interview, I talk with Silver about games, the meaning of creativity, and AI's potential to avert disasters such as climate change, human-made pathogens, mass poverty, and environmental catastrophe.

As a kid, did you play games differently from other kids?

I had some funny moments playing in National School Scrabble competitions. In one event, at the end of the final game, I asked my opponent, "Are you sure you want to play that? Why not play this other word, which scores more points?" He changed his move and won the game and championship, which made me really happy.

More than winning, I am fascinated with what it means to play a game really well.

How did you translate that love of games into a real job?

Later on, I played junior chess, where I met [fellow DeepMind co-founder] Demis Hassabis. At that time, he was the strongest boy chess player of his age in the world. He would turn up in my local town when he needed pocket money, play in these tournaments, win the 50-pound prize money, and then go back home. Later, we got to know each other at Cambridge and together we set up Elixir, our games company. Now we're back together at DeepMind.

What did this fascination with games teach you about problem solving?

Humans want to believe that we've got this special capacity called creativity that our algorithms don't or won't have. It's a fallacy.

We've already seen the beginnings of creativity in our AIs. There was a moment in the second game of the [2016] AlphaGo match [against world champion Lee Sedol] where it played a particular move called move 37. The go community certainly felt that this was creative. It tried something new which didn't come from examples of what would normally be done there.

But is that the same kind of broad creativity that humans can apply to anything, rather than just moves within a game?

The whole process of trial-and-error learning, of trying to figure out for yourself, or asking AI to figure out for itself, how to solve the problem is a process of creativity. You or the AI start off not knowing anything. Then you or it discover one new thing, one creative leap, one new pattern or one new idea that helps in achieving the goal a little bit better than before. And now you have this new way of playing your game, solving your puzzle, or interacting with people. The process is a million mini discoveries, one after the other. It is the essence of creativity.

If our algorithms aren't creative, they'll get stuck. They need an ability to try out new ideas for themselves, ideas that we're not providing. That has to be the direction of future research, to keep pushing on systems that can do that for themselves.

If we can crack [how self-learning systems achieve goals], it's more powerful than writing a system that just plays go. Because then we'll have an ability to learn to solve a problem that can be applied to many situations.

Many thought that computers could only ever play go at the level of human amateurs. Did you ever doubt your ability to make progress?

When I arrived in South Korea [for the 2016 AlphaGo match] and saw row upon row of cameras set up to watch and heard how many people [over 200 million] were watching online, I thought, "Hang on, is this really going to work?" It was scary. The world champion is unbelievably versatile and creative in his ability to probe the program for weaknesses. He would try everything in an attempt to push the program into weird situations that don't normally occur.

I feel lucky that we stood up to that test. That spectacular and terrifying experience led me to reflect. I stepped back and asked, "Can we go back to the basics to understand what it means for a system to truly learn for itself?" To find something purer, we threw away the human knowledge that had gone into it and came up with AlphaZero.

Humans have developed well-known strategies for go over millennia. What did you think as AlphaZero quickly discovered, and rejected, these in favor of novel approaches?

We set up board positions where the original version of AlphaGo had made mistakes. We thought if we could find a new version that gets them right, we'd make progress. At first, we made massive progress, but then it appeared to stop. We thought it wasn't getting 20 or 30 positions right.

Fan Hui, the professional player [and European champion] we were working with, spent hours studying the moves. Eventually, he said that the professional players were wrong in these positions and AlphaZero was right. It found solutions that made him reassess what was in the category of being a mistake. I realized that we had an ability to overturn what humans thought was standard knowledge.

After go, you moved on to a program that mastered StarCraft, a real-time strategy video game. Why the jump to video games?

Go is one narrow domain. Extending from that to the human brain's breadth of capabilities requires a huge number of steps. We're trying to add any dimensions of complexity where humans can do things, but our agents can't.

AlphaStar moves toward things which are more naturalistic. Like human vision, the system only gets to look at a certain part of the map. It's not like playing go or chess where you see all of your opponent's pieces. You see nearby information and have to scout to acquire information. These aspects bring it closer to what happens in the real world.

What's the end goal?

I think it's AI agents that are as broadly capable as human brains. We don't know how to get there yet, but we have a proof of existence in the human brain.

Replicating the human brain? Do you really think that's realistic?

I don't believe in magical, mystical explanations of the brain. At some level, the human brain is an algorithm which takes inputs and produces outputs in a powerful and general way. We're limited by our ability to understand and build AIs, but that understanding is growing fast. Today we have systems that are able to crack narrow domains like go. We've also got language models which can understand and produce compelling language. We're building things one challenge at a time.

So, you think there's no ceiling to what AI can do?

We're just at the beginning. Imagine if you run evolution for another 4 billion years. Where would we end up? Maybe we would have much more sophisticated intelligences which could do a much better job. I see AI a little bit like that. There is no limit to this process because the world is essentially infinitely complex.

And so, is there a limit? At some point, you hit physical limits, so it's not that there are no bounds. Eventually you use up all of the energy in the universe and all of the atoms in the universe in building your computational device. But relative to where we are now, that's essentially limitless intelligence. The spectrum beyond human intelligence is vast, and that's an exciting thought.

Stephen Hawking, who served on the Bulletin's Board of Sponsors, worried about unintended consequences of machine intelligence. Do you share his concern?

I worry about the unintended consequences of human intelligence, such as climate change, human-made pathogens, mass poverty, and environmental catastrophe. The quest for AI should result in new technology, greater understanding, and smarter decision making. AI may one day become our greatest tool in averting such disasters. However, we should proceed cautiously and establish clear rules prohibiting unacceptable uses of AI, such as banning the development of autonomous weapons.

You've had many successes meeting these grand challenges through games, but have there been any disappointments?

Well, supervised learning, this idea that you learn from examples, has had an enormous mainstream impact. Most of the big applications that come out of Google use supervised learning somewhere in the system. Machine translation systems from English to French, for example, in which you want to know the right translation of a particular sentence, are trained by supervised learning. It is a very well understood problem, and we've got clear machinery now that is effective at scaling up.

One of my disappointments at the moment is that we haven't yet seen that level of impact with self-learning systems through reinforcement learning. In the future, I'd love to see self-learning systems which are interacting with people, in virtual worlds, in ways that are really achieving our goals. For example, a digital assistant that's learning for itself the best way to accomplish your goals. That would be a beautiful accomplishment.

What kinds of goals?

Maybe we don't need to say. Maybe it's more like we pat our AI on the back every time it does something we like, and it learns to maximize the number of pats on the back it gets and, in doing so, achieves all kinds of goals for us, enriching our lives and helping us do things better. But we are far from this.

Do you have a personal goal for your work?

During the AlphaGo match with Lee Sedol, I went outside and found a go player in tears. I thought he was sad about how things were going, but he wasn't. In this domain in which he had invested so much, AlphaGo was playing moves he hadn't realized were possible. Those moves brought him a profound sense of beauty.

I'm not enough of a go player to appreciate that at the level he could. However, we should strive to build intelligence where we all get a sense of that.

If you look around, not just in the human world but in the animal world, there are amazing examples of intelligence. I'm drawn to say, "We built something that's adding to that spectrum of intelligence." We should do this not because of what it does or how it helps us, but because intelligence is a beautiful thing.
