Archive for the ‘Artificial General Intelligence’ Category

Tesla FSD v12 Rolls Out to Employees With Update 2023.38.10 … – Not a Tesla App

November 24, 2023

By Kevin Armstrong

Elon Musk announced earlier this month that Tesla's Full Self-Driving (FSD) v12 would be released in two weeks. The "two weeks" timeframe Musk is famous for was met with skepticism. However, it seems that Tesla is right on track with this rollout.

We have learned through a trusted source that FSD v12 has started rolling out internally with Tesla update 2023.38.10.

Update: Musk has responded to our article on X, confirming that Tesla has indeed started rolling out FSD v12 to employees.

FSD v12 is the update that is expected to remove "beta" from the title. The initial rollout to employees appears more limited in scale than previous updates. Considering the magnitude of the changes in this version, it makes sense to start slow.

The timing of this internal release is close to two major Tesla events. The Cybertruck delivery event is just a few days away. Many eyes will be on the company during the event, allowing Tesla to possibly show the world its latest breakthrough. Alternatively, the highly anticipated holiday update, often regarded as the best update of the year, is expected to be released by 'Santa Musk' in the coming weeks, potentially featuring v12 as a significant addition.

The latest public FSD build, v11.4.7.3, is Tesla update 2023.27.7. This FSD build is several revisions behind the latest production builds, so it's nice to see that v12 will bring FSD beta testers back up to speed with some of the latest Tesla features such as Predictive Charger Availability, Faster Hazard Lights After a Crash, and other features included in updates 2023.32 and 2023.38.

As for FSD improvements, we haven't had a chance to see the release notes for FSD v12 yet. However, now that it has started going out to employees, it shouldn't be long before we find out all the FSD improvements included in this milestone release.

A significant change in v12 is the elimination of over 300,000 lines of code that previously governed FSD's vehicle-control functions, replaced by a greater reliance on neural networks. This transition reduces the system's dependency on hard-coded programming. Until now, Tesla's neural networks have been limited to detecting objects and determining their attributes; v12 is the first release to use neural networks to control steering, acceleration, and braking.

FSD v12 represents a significant leap in Tesla's FSD technology. Musk has described it as an "end-to-end AI," employing a "photon in, controls out" approach akin to human optical processing. This analogy underscores Tesla's ambition to replicate human-like decision-making capabilities in its vehicles.

Labeled as a "Baby AGI" (Artificial General Intelligence), the system is designed to perceive and understand the complexities of the real world. This philosophical and technological shift in AI-driven autonomy was vividly showcased during a live-streamed drive by Musk through Palo Alto, where the Model S demonstrated smooth and almost flawless navigation through various real-world scenarios, including construction zones, roundabouts, and traffic. That was three months ago; imagine how much the system has learned in 90 days.

The rollout of FSD v12 marks a critical point in Tesla's journey in AI and autonomous driving. It's not just about technological prowess but also about aligning AI with nuanced human behavior. With Musk's continued focus on AI, which is evident across his ventures, Tesla remains a crucial player in the EV market and the broader AI revolution.

As we await further details on the public release of FSD v12 and its potential showcase at the Cybertruck event, it's clear that Tesla is moving closer to a future where cars are not just self-driving but are also intelligent and responsive to the complexities of the real world.


By Kevin Armstrong

Tesla's highly anticipated Cybertruck is gracing showrooms nationwide. Cybertruck was trending on X as users posted pictures and videos from Tesla stores throughout the U.S., ramping up even more excitement for the delivery event on November 30th.

Cybertruck started its showroom appearances in San Diego and San Jose earlier this week, but according to Elon Musk, several more Tesla stores may want to clear some space. Musk posted on X: "Cybertrucks are on their way to Tesla stores in North America!" It's unclear if that means every Tesla store and gallery across North America or just a few. There are 236 stores in the U.S., 24 in Canada, and 3 in Mexico.

It's also strange that so many Cybertrucks are in showrooms, as it's been reported that Tesla Product Design Director Javier Verdura said only ten would be delivered at the November 30th event. It's believed the slow rollout highlights the company's cautious approach, ensuring quality control before increasing deliveries and production volumes.

'A Better Theater,' a popular site for Tesla owners to stream content in their vehicles, is tracking all showrooms that have the Cybertruck on display. We've added the list below, but for the latest locations, check out their site.

860 Washington St., New York, NY 10014

333 Santana Row, San Jose, CA 95128

6692 Auto Center Dr, Buena Park, CA 90621

4545 La Jolla Village Dr, San Diego, CA 92122

Bellevue, WA 98004 (Coming Soon)

2223 N Westshore Blvd, Tampa, FL 33607

4039 NE 1st Ave, Miami, FL 33137

9140 E Independence Blvd, Matthews, NC 28105

901 N Rush St, Chicago, IL 60611

This widespread showcase in Tesla showrooms is more than just about displaying the new Cybertruck; it's a strategic move to draw consumers into showrooms. As Cybertrucks make their way into more stores, potential customers and enthusiasts get a firsthand look, creating a tangible sense of excitement. This strategy is particularly effective before Black Friday, leveraging the shopping season's foot traffic to draw more attention.

Adding to the intrigue, Tesla has revealed key specifications of the Cybertruck in its showrooms. The confirmed towing capacity of 11,000 lbs and a payload of 2,500 lbs have been significant talking points, giving potential buyers more reasons to consider the Cybertruck as a formidable competitor in the electric vehicle market. However, we still don't know the price.

Despite the initially limited delivery numbers, Tesla's decision to place Cybertrucks in showrooms across North America is another clever marketing move - for a company that doesn't advertise. It maintains high levels of interest and anticipation and gives the rest of the lineup a chance to shine. Christmas comes earlier this year; just a few more sleeps until November 30th.

By Kevin Armstrong

Tesla's incredible journey started by piecing together the Roadster, a painstaking ordeal that nearly caused the company to go bankrupt more than once. The piece-by-piece instruction manual to build the car that started an automotive revolution has been made public, fully open-sourced. CEO Elon Musk posted on X: "All design & engineering of the original @Tesla Roadster is now fully open source. Whatever we have, you now have."

The open-source announcement has sparked enthusiasm and curiosity within the engineering community. A post from the World of Engineering (@engineers_feed) on X, asking, "Does this mean I can build my own roadster in my garage?" garnered a direct response from Musk: "* some assembly required."

Theoretically, if one can get their hands on the parts, they now have some direction for building one of these historic vehicles. From a business perspective, this kind of information sharing with competitors is curious, although it does follow Tesla's mission statement to accelerate the world's transition to sustainable energy. And although the designs are 15 years old, they could still prove useful.

Tesla has clarified the nature of the information released, stating it's a resource for Roadster enthusiasts derived from the car's R&D phase. The details are not intended for manufacturing, repair, or maintenance and may not align with final production models. Users leveraging this information are reminded of their responsibility to adhere to legal and safety protocols, as Tesla offers no warranties for work done using these details. This open-source initiative encourages innovation but stresses the importance of safety and legal compliance.

Launched in 2008, the original Roadster was the first highway-legal electric vehicle to use lithium-ion batteries and achieve over 200 miles per charge. It bankrolled the next phase of Tesla, the Model S, and set a benchmark for future EVs.

While this open-source initiative revisits Tesla's past, it also shifts the focus back to the next-generation Roadster. Initially unveiled in 2017, its production has been delayed, and there is no timeline for when the new sportscars will be manufactured. Moreover, Tesla's focus on the Cybertruck and a more affordable $25,000 EV indicates a strategic balance between innovation and mass EV adoption.

Tesla's decision to make the original Roadster's design and engineering open source should not be too surprising. Musk has said, "I don't care about patents. Patents are for the weak. They don't actually help advance things. They just stop others from following you." Perhaps the biggest surprise is how long it took for Musk to open-source the Roadster blueprint.


Searching AI-powered ChatGpt for HNP authors, the Great Salt … – The Daily Herald

Photo: AI image generated from the text prompt "Lasana Sekou writer fist raised portrait." (deepai.org, 11.21.23)

By Lasana M. Sekou

In early September 2023, ChatGpt August 3 Version at chat.openai.com was asked by Offshore Editing Services (OES) to identify writers published at House of Nehesi Publishers (HNP), an indie press in St. Martin, Caribbean.

When asked about itself, the artificial intelligence (AI)-powered/generative chatbot said, "You can use it to ask questions, get information, seek advice, or engage in natural language conversation on a wide range of topics."

For the information search, 22 writers published at HNP were selected: world-famous authors, first-time authors, and three upcoming authors, two of whom had also been published by other houses. Only the writers' first and last names were entered to generate the information from ChatGpt.

Results

ChatGpt had no information for 13 authors, and the first sentence of its response for each was: "I'm sorry, but I don't have specific information about an individual named (Name of writer) in my knowledge base, which goes up until September 2021."

The information provided by ChatGpt about four HNP authors was generally correct. The answers for five writers ranged from correct to incorrect to clueless: a wrong birthplace or date; books attributed to the author that the author did not write; fictitious book titles; or an inability to name any book by the writer.

Whether hailed as "the first glimmers on the horizon of artificial general intelligence" (Noam Chomsky et al.) or railed against as a certain threat to content creators, researchers, even thinkers, ChatGpt and similar advanced machine learning models (e.g., Google's Bard, Microsoft's Sydney) are pre-trained and draw chiefly from swaths of online or digital data to answer questions put to them by anyone searching and asking for information online.

Since the September search by OES, the Default (GPT-3.5) was upgraded to the ChatGPT September 25 Version. On October 4, the day a version of this article was posted at an HNP Facebook page, a sampling of five of the 13 authors with no information in early September was re-searched before the social media posting.

Only one of the previously non-identified authors in the sampling was found to have information available in October. According to ChatGPT, she had passed away on June 6, 1996. As of this writing, our dear writer is alive and well.

The openai.com site maintains the same disclaimer of sorts: "Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts."

OpenAI itself warns that "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers," according to a BBC article of December 7, 2022.

Libraries closed and closing? Imagine primary school children and high schoolers navigating AI-powered/generative chatbots from their computers and other digital devices, searching for information for their homework assignments about writers, artists, and other aspects of culture ... especially if they belong to cultures historically subject to erasure attempts, marginalization, and what author Toni Morrison has called "oppressive language."

A Bit Beyond Looking Up Writers

Generated of late by the multibillion-dollar AI industry, TV scripts, school essays, and resumes are being written by bots that sound a lot like a human.

"Artificial intelligence is changing our lives from education and politics to art and healthcare. The AI industry continues to develop at a rapid pace," said an article at NPR.org on May 25, 2023.

"But today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism," opined Noam Chomsky et al. in The New York Times of March 8, 2023.

No need to fear AI bots and models. It will take human intelligence to rise to the heights and to raise from the depths what will best [be]come of artificial intelligence.

By the way, ChatGPT got guavaberry pretty good. Its Great Salt Pond information did not mention Great Bay, Philipsburg, or St. Martin [regardless of the spelling of the island's name]. As for reparations, the AI gets an A for the information generated at ChatGPT (on the last date checked).


Unveiling the Mechanics of AI: How Artificial Intelligence Works – Medium

Photo by Mojahid Mottakin on Unsplash

In the era of rapid technological advancements, perhaps none has captured the imagination quite like Artificial Intelligence (AI). It's the science fiction dream turned reality, where machines seem to mimic human intelligence, making decisions, learning from experience, and performing tasks that were once exclusive to human minds. But how does AI actually work? What powers these digital minds that are reshaping industries and revolutionizing our daily lives?

The Essence of AI: Learning from Data

At its core, AI operates through a process known as machine learning. Imagine teaching a computer to perform a task by showing it numerous examples. The computer learns from these examples and begins to identify patterns within the data. It's akin to how a child learns to differentiate between animals by looking at pictures: the more examples they see, the better they become at distinguishing between them.

Training and Algorithms

This learning process begins with training data. Let's take an example: teaching a computer to recognize cats in pictures. A machine learning algorithm, like a set of instructions, processes this data, seeking out patterns that define a cat: the shape of the ears, the contours of the face, and so on. The algorithm then uses these patterns to classify new, unseen images as either containing a cat or not.

But this is just the beginning. As the algorithm processes more data, it refines its understanding of what makes a cat, adjusting its internal parameters to become increasingly accurate. This iterative process is what enables AI to improve over time: the more data it's exposed to, the better it becomes at making accurate predictions or classifications.
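The iterative adjustment described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not a real vision system: it stands in for "cat pictures" with two made-up numeric features and trains a tiny logistic-regression classifier by repeatedly nudging its parameters against each example's error.

```python
import math
import random

# Invented toy features standing in for images: (ear_pointiness, whisker_length).
random.seed(0)
cats = [(random.uniform(0.6, 1.0), random.uniform(0.6, 1.0)) for _ in range(50)]
others = [(random.uniform(0.0, 0.4), random.uniform(0.0, 0.4)) for _ in range(50)]
data = [(x, 1) for x in cats] + [(x, 0) for x in others]

w = [0.0, 0.0]  # internal parameters the algorithm will adjust
b = 0.0
lr = 0.5        # learning rate: how big each nudge is

def predict(x):
    # Weighted sum of features squashed to a probability of "cat".
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Iterative refinement: each pass over the data shifts the parameters
# in the direction that reduces the error on each example.
for epoch in range(100):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

accuracy = sum((predict(x) > 0.5) == y for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

Because the toy classes are cleanly separated, accuracy climbs quickly; real image data is messier, which is exactly why volume and diversity of examples matter.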

Types of Machine Learning

There are a few flavors of machine learning that you might have heard of: supervised, unsupervised, and reinforcement learning.

Deep Learning: Unleashing the Neural Networks

Deep Learning is a subset of machine learning that has gained immense popularity due to its effectiveness in solving complex tasks. It's inspired by the structure of the human brain: neural networks. These networks consist of layers of interconnected nodes, or neurons. Each neuron processes information and passes it on to the next layer. The "deep" in deep learning comes from the fact that these networks can be quite deep, with many layers.

This structure enables deep learning models to automatically learn hierarchical representations of data. It's used for tasks like image and speech recognition, language processing, and even playing strategic games like chess.
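The layered structure is easy to see in code. This sketch passes an input through three layers of neurons, each layer consuming the previous layer's outputs; the weights here are arbitrary illustrative numbers, not trained values.

```python
import math

def layer(inputs, weights, biases):
    # Each neuron: weighted sum of all inputs from the previous layer,
    # plus a bias, passed through a non-linear activation (tanh).
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.2]  # raw input features

# Three stacked layers: 2 inputs -> 3 neurons -> 2 neurons -> 1 output.
h1 = layer(x,  [[0.4, -0.6], [0.9, 0.1], [-0.3, 0.8]], [0.1, -0.2, 0.0])
h2 = layer(h1, [[0.7, -0.5, 0.2], [0.1, 0.9, -0.4]],   [0.0, 0.3])
out = layer(h2, [[1.2, -0.8]], [0.1])

print(out)
```

Training such a network means adjusting all of those weights and biases at once (via backpropagation); "deep" models simply stack many more of these layers, letting early layers learn simple features and later layers learn combinations of them.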

The Power of Data and Compute

The success of AI, particularly deep learning, hinges on two key elements: data and computing power. The more diverse and extensive the data, the better an AI model can learn and generalize. Similarly, complex tasks require significant computing resources to process and analyze the data. This is why AI breakthroughs in recent years have often been accompanied by advancements in data collection methods and improvements in processing capabilities.

The Future of AI

As AI technology continues to evolve, we're moving beyond narrowly focused applications to more general AI, or artificial general intelligence (AGI). AGI would possess human-like cognitive abilities, enabling it to perform a wide range of tasks with adaptability and creativity.

While we're not quite there yet, the strides made in AI technology are undoubtedly transforming industries and reshaping the way we interact with technology. The intricate dance between data, algorithms, and computing power is what fuels this technological revolution, and the journey is only beginning.

To conclude, Artificial Intelligence is not a singular magic trick but rather a symphony of data, algorithms, and computing power. It's a journey of teaching machines to learn from examples and make informed decisions. As we unlock more of AI's potential, it will continue to reshape our world, making tasks more efficient, solving complex problems, and pushing the boundaries of what machines can achieve.


The stakes are high; so are the rewards: Artificial intelligence and … – Building

Disruptive technologies, of which artificial intelligence (AI) is currently a frontrunner, can be such charming underminers of our own certainty, irrevocably pushing us outside our comfort zone and forcing us to rethink all that we have been taking for granted. They are, in many ways, the white rabbit of our industry. And, like Alice in Wonderland, we can choose to ignore them. Or we can go down the rabbit hole.

Down the rabbit hole there is a great deal of uncertainty, unknowns and failure. But there are also a lot of possibilities. "I prefer the hell of chaos to the hell of order," writes Wislawa Szymborska in her homonymous poem, and nothing is truer when it comes to disruption.

It is indeed within the creative chaos and outside of our comfort zones that we can reinvent paradigms. It is only by pushing our own boundaries that change is possible. And it is only by embracing change that we can bring new ideas to the table. But change can have unpredictable consequences. Think of the smartphone: a technology nobody knew that they needed and now not only everyone uses but it has also reshaped the way people interact by establishing the rise of social media.

AI is the superset of techniques powered by machine learning (ML) that enable machines to mimic human behaviour

So, what will the consequences of disruptive technologies, like AI, be in the Architecture Engineering Construction and Operation (AECO) industry? Before we discuss the particulars of AI in our industry, let us first place it in a wider framework.

In broad terms, AI is the superset of techniques powered by machine learning (ML) that enable machines to mimic human behaviour. You may have also heard of artificial general intelligence (AGI), or "God-like AI": a computer system capable of generating new scientific knowledge and performing tasks as a human would.

The major, or rather more publicised, dramatis personae in the race towards AGI are DeepMind and OpenAI. DeepMind was acquired by Google for over half a billion dollars in 2014. Since then, it has beaten the Go world champion and solved one of biology's greatest unsolved problems. OpenAI, on the other hand, started as a non-profit competitor of DeepMind before pivoting to for-profit after Microsoft's $1bn investment in 2019.

Of course, there are also other rising stars in the game, such as Anthropic, Cohere and Stability AI, who are all invested in developing closed or open-sourced Large Language Models (LLMs): natural language processing (NLP) programs trained through neural networks to perform various text completion tasks, having been given a well-crafted prompt.
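The "text completion" task that LLMs perform can be illustrated at toy scale. The sketch below is a deliberately simple bigram model, counting which word follows which in a tiny corpus, then always picking the most frequent follower; real LLMs use neural networks over vast corpora, but the task of continuing a prompt is the same.

```python
from collections import defaultdict

# Tiny invented corpus; real models train on billions of words.
corpus = ("the model predicts the next word and the model learns "
          "patterns in the text so the model can complete the text").split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def complete(word, n=4):
    # Greedy completion: repeatedly append the most frequent follower.
    out = [word]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(max(followers, key=followers.get))
    return " ".join(out)

print(complete("the"))
```

A well-crafted prompt matters even here: the starting word determines the whole continuation, a crude analogue of prompt engineering in full-scale LLMs.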

The number of players in the field is steadily growing as the allure of these systems increases exponentially along with the promise of the positive change they will bring. However, it is important to note that the biggest player in this game is us. It is the data that we produce, the assets we create and the actions we take that train these systems.

Every single action in our life is underscored by the production of data that is being mined and used in a variety of ways

Data has become the currency of modern society. It is, in many ways, the most abundantly generated product of the 21st century. Every single action in our life is underscored by the production of data that is being mined and used in a variety of ways.

From identifying spam emails and suggesting what song to listen to, to defining our banking credit profile and making shopping recommendations, data and AI are guiding decision-making everywhere. Thousands of companies are leveraging the power of data through machine learning.

Aside from data, there are two more things that drive the rise of AI: the rate of the adoption of the technology and the rise in computational power.

The rate of AI adoption is incredible. According to McKinsey's 2022 report, it more than doubled between 2017 and 2022 (and these numbers are predicted to be higher this year).

Similarly, the computational power used to train AI has increased by a factor of 100 million in the past 10 years. According to the authors of Compute Trends Across Three Eras of Machine Learning, before 2010 the compute used to train AI grew in line with Moore's Law (roughly doubling every 20 months). It is now doubling every six months. This power is allowing us to feed an unprecedented amount of data to LLMs in order to train them.
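The difference between those two doubling times is easy to quantify. A quick back-of-the-envelope calculation: a six-month doubling over a decade compounds to roughly a millionfold, versus under a hundredfold for the older 20-month regime, which suggests the headline 100-million figure also reflects shifts beyond a single steady doubling rate.

```python
# Growth implied by a fixed doubling time over 10 years (120 months).
doublings_fast = 120 / 6    # 20 doublings at a 6-month doubling time
doublings_slow = 120 / 20   # 6 doublings at a 20-month doubling time

factor_fast = 2 ** doublings_fast   # ~1,050,000x
factor_slow = 2 ** doublings_slow   # 64x

print(f"6-month doubling: {factor_fast:,.0f}x over a decade")
print(f"20-month doubling: {factor_slow:,.0f}x over a decade")
```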

Through these developments, we have seen the rise of unbelievable image and video manipulation capabilities, unlike anything that we thought possible a few years ago. Interestingly, the exponential growth of AI has meant that, within the span of only a couple of years, we went from the enthusiasm and occasional apprehension of Generative Adversarial Network (GAN) image and video manipulation (who hasn't seen a deep-fake online?) to the exciting new advancements of NLP content creation.

Large Language Models have truly pulled us deep down into the rabbit hole: generative pre-trained transformers (the GPT in ChatGPT) are rewriting the book on IP, productivity and, some say, creativity. Their rate of development is beyond anything anyone could have imagined.

ChatGPT, built on GPT-3.5, launched in November 2022, primarily as a text-based tool; GPT-3.5 passed the bar exam in the bottom 10th percentile. A few months later, GPT-4 was able to understand images and passed the bar in the top 10th percentile. Microsoft is using the power of these models to launch tools such as GitHub Copilot for developers and, in the near future, 365 Copilot for the general public. It is likely we will use these tools to boost our productivity on everyday tasks.

These models have the ability to describe images, understand context and make suggestions based on said context, bringing forth the rise of diffusion models. The underlying advances that drove the rise of LLMs were also pushing multi-modal models with abilities and qualities that had never been seen before, resulting in the widespread use of applications such as MidJourney and Stability AIs DreamStudio.

These models are using natural language prompts to produce incredibly intricate images and, in our case, architectural images in any style we so choose. The quality of these images is so good that there are anecdotes of small offices winning entire competitions through images created via diffusion models, and then struggling to deliver on the promise that the AI system has imagined.

In fact, there is an entirely new concept around AI and NLP called prompt engineering. This describes the ability to craft your prompt in such a way that it instructs the system to produce images which directly match the prompter's expectations.

It is effectively the art of crafting sentences which describe what you have imagined and are closer to the computer's understanding of what you want to see. It is a trade that could, in the future, see artists and architects competing with language craftsmen and writers.

Uncertainties and unknowns

So, the question naturally arises: will AI replace architects and other key construction professionals? Before we answer that, there are some bigger, perhaps more existential, threats to consider.

Only a couple of months ago Sam Altman, CEO of OpenAI, spoke to the US Senate, calling for regulations around AI. This is because scientists fear that artificial general intelligence will bring about the singularity: a hypothetical future when technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilisation.

Every year, surveys of people in the AI field consistently place this scenario before the year 2060. This latent threat has created a very polarised environment, with half of the community thinking that AI poses an apocalyptic risk and the other half believing these concerns are exaggerated and disruptive. I believe that, as with everything, the truth is somewhere in between. The grey areas are, after all, so much more interesting.

There are efforts to mitigate the existential risk that AI poses. You may have read of AI alignment, which aims to align AI systems' goals with human values, or of AI safety and fairness, which reviews how these systems safely and fairly respond to what we ask of them.

We need to intensify our work in these areas, as AI is not going anywhere. The stakes are simply too high. It is a race driven by the promise of posterity and money. AI start-ups were once making tens of millions; today they make tens of billions.

But, whether you are a fan of Peter Pan or Battlestar Galactica, you know that all of this has happened before, and it will happen again. Many have compared the revolutionary changes brought by AI to the splitting of the atom.

However, creative minds have nothing to fear from the advent of technological revolution. The truth is that, if we play our cards right, AI is not going to replace us, but rather augment our creativity and problem-solving capabilities.

The question is how to channel this technology to become a creative assistant that augments rather than replaces our creativity

To do this, we need to take a step back and really think about how AI can impact us in a positive way. The question is how to channel this technology to become a creative assistant that augments rather than replaces our creativity.

The first step is to identify all of the things that AI is good for (automation, augmentation, facilitation) and find real business cases that could positively impact our current workflows. Experimenting with diffusion models and prompt engineering is undeniably alluring, as we can produce incredibly exciting outputs with minimum effort. However, we need to understand the business case behind this and how it can enhance our current workflows.

According to my colleague Sherif Tarabishy, associate partner and the design systems analyst specialising in AI/ML for the Applied R+D group at Foster + Partners: "Deploying and monitoring machine learning models in a production environment is complex. This is why it is more beneficial to create a simple model that addresses a core business need, rather than focusing on complex, state-of-the-art models that don't have a clear use case."

Involving domain experts across the business early on is very important to identify and evaluate those impactful business use cases. They provide critical context and an understanding of the problem domain. They also help to identify the best data governance framework for a use-case, managing availability, usability, integrity and security of the business data.


That is to say that there are many areas beyond image creation where AI and ML are expected to impact the field of construction. Generative design, surrogate models, design assist models, knowledge dissemination and even business insights are just some of the areas where AI and ML can be used in our pipelines.

At Foster + Partners, we have been developing services and tools along these lines. For example, since 2019, the Applied R+D group has been publishing research around design-assist and surrogate models. Design-assist models could become collaborators in our everyday design tasks, solving difficult problems in real time. They could suggest options as we design, automate processes or provide answers to tasks that would otherwise be time consuming and labour intensive.

These are actual problems that AI-powered design assist tools can solve for us, by automating mundane tasks and, by extension, turbo-powering productivity and allowing more time for creative tasks to take place

For example, imagine that we have a thermoactive, passively actuated laminate material that deforms based on varied thermal conditions. As designers, we are interested in the start and end points of the deformation (how it looks at rest and how we would like it to look when heated); that is what defines our design.

In order to control that, we would need to know how to control the laminate layering, a process that requires non-linear analysis and quite a lot of time. Or, we could use ML to train a system to predict the laminate layering and give real-time feedback to the designer.

This was exactly the task that we identified in 2019 with Marcin Kosicki, associate partner in the Applied R+D group, and our Autodesk collaborators Panagiotis Michalatos and Amira Abdel-Rahman. How can we use ML to predict how a passively actuated material would react to variable temperature changes?

With the help of Hydra, our bespoke in-house distributed computing and optimisation system, we ran thousands of simulations to understand how thermoactivated laminates behave under varied heat conditions. We then used that data to train a deep neural network to tell us what the laminate layering should be, given a particular deformation that we required.

These are actual problems that AI-powered design assist tools can solve for us, by automating mundane tasks and, by extension, turbo-powering productivity and allowing more time for creative tasks to take place.

There is currently a plethora of new third-party software developed to yield the power of optimisation and ML to provide design assist solutions to architects and contractors. From floorplate layout and massing exploration to drawing automation and delivery, these products are looking at how these techniques can be used to either provide quick design explorations during concept or automation of tasks and processes during delivery.

Diffusion models are also perceived as design assist models, as they can go way beyond image making and prompt engineering. Imagine creating 3D models and getting suggestions on detailed visualisations as the massing is changed, directly on your CAD viewport.

To that end, we have created a Rhino plug-in using our in-house ML inference API, which makes this possible. It allows us to quickly deploy and experiment with new ML models, with impressive results.

What is interesting about the use of diffusion models in this case is that we do not ask the computer to imagine a design for us. Instead, we provide the design, and we prompt for design-assist suggestions on the look and feel. The best part is that we can train the model on our own data, so any suggestions are inspired by our own designs and historical data.

Surrogate models can also be incredibly useful for providing real-time feedback and saving designers a lot of time. A surrogate model is an ML model which is trained to predict the outcome of what would otherwise be an analytical process.

That means that, instead of creating a model and running an analysis (for example, daylight potential), we could train a surrogate model to predict with high accuracy what the daylight potential of the massing is, without us having to export the model, run the analysis in different software (with the interoperability challenges that this may entail) and then bring the results back.

There are two components which allow us to do this: 1) high volumes of data to train the model with and 2) willingness to compromise. The former is true for any ML-trained model. The latter applies particularly to surrogate models which are meant to give a prediction rather than an accurate result.

If the data is rich enough and the model is properly trained, these analytical predictions can reach 90-95% accuracy. In any case, the user is asked to sacrifice absolute accuracy for real-time speed: a compromise most of us would be happy to make during the early design stages of a project, in order to make the right decisions earlier on.
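That accuracy-for-speed trade can be quantified directly: hold out some analysis runs and measure how far the surrogate's instant predictions fall from the ground truth. A minimal sketch, with an invented `daylight_analysis` function standing in for the slow external tool and a k-nearest-neighbour average standing in for the trained surrogate:

```python
import math
import random

random.seed(7)

# Hypothetical "slow" analysis: scores a massing described by
# (height, depth, glazing ratio). A real run would leave the CAD tool.
def daylight_analysis(h, d, g):
    return g * math.exp(-d / 10) * (1 - math.exp(-h / 20))

# Training set: massings with precomputed analysis results.
train = [(random.uniform(3, 60), random.uniform(5, 30), random.uniform(0.1, 0.9))
         for _ in range(2000)]
labels = [daylight_analysis(*m) for m in train]

# A k-nearest-neighbour regressor stands in for the trained surrogate;
# the glazing ratio is rescaled so all three inputs carry similar weight.
def surrogate(h, d, g, k=5):
    nearest = sorted(range(len(train)),
                     key=lambda i: (train[i][0] - h) ** 2
                                 + (train[i][1] - d) ** 2
                                 + (100 * (train[i][2] - g)) ** 2)
    return sum(labels[i] for i in nearest[:k]) / k

# The compromise: a near-instant prediction, slightly off the exact value.
exact = daylight_analysis(25, 12, 0.5)
approx = surrogate(25, 12, 0.5)
```

The gap between `exact` and `approx` is the accuracy the designer gives up; the time not spent round-tripping through external analysis software is what is gained.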

We have been developing such models for analyses like visual and spatial connectivity since the early 2020s. With these models, we are replacing slower analytical processes with much faster and very accurate predictions. To make them accessible to everyone, we have developed a Rhino-ML plug-in that could be used by any architect at the practice.

Knowledge dissemination is another major aspect of AI. Imagine all the data and therefore the knowledge that an AEC practice is producing every day. A lot of that becomes untapped information saved on servers.

What if we had the chance to make all this knowledge accessible to everyone? What would it mean for a young designer to be able to ask questions that only a seasoned architect could have an answer to?

That is where the power of LLMs comes in. With the push for LLMs, and even the new focus on foundation models with zero- and few-shot capabilities (ie the ability to perform a task with no examples, or with only a couple of examples of what the user is after), now more than ever the focus is on your own data. The trick, then, is choosing the right ML model and making sure the data used is appropriately curated.

Still, this proposition should be treated with care. Some pre-trained models are prone to AI hallucinations, providing results that are not accurate, because they are trained to improvise when they do not find direct answers to questions they have been asked. This can be problematic as it means the accuracy of the response received is compromised, which can have a detrimental effect on results, depending on the context.

On the other hand, even if the model is trained not to improvise, its answers are still going to be only as good as the data it has been trained on. This puts the onus on each practice to ensure that their data has been properly curated. At Foster + Partners, we have been developing applications such as this, using different models and experimenting with how they can be deployed office wide. Many other architectural offices are following similar routes.
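One common mitigation for hallucination, sketched here as an illustration rather than a description of any specific deployment, is to ground the model's answers in passages retrieved from the practice's curated documents, and to refuse when nothing relevant is found rather than letting the model improvise. In miniature (the documents and the keyword scoring are invented for the sketch):

```python
# A curated in-house knowledge base (hypothetical entries).
curated_docs = {
    "facade": "Double-skin facades reduced solar gain by 30% on past towers.",
    "timber": "CLT floor plates above 9 m spans required acoustic dampers.",
}

def keyword_overlap(question: str, text: str) -> int:
    # Crude relevance score: shared lowercase words between question and text.
    return len(set(question.lower().split()) & set(text.lower().split()))

def answer(question: str, threshold: int = 2) -> str:
    best = max(curated_docs.values(), key=lambda t: keyword_overlap(question, t))
    # Refuse rather than improvise when the match is weak.
    if keyword_overlap(question, best) < threshold:
        return "No curated source found."
    return best

reply = answer("What spans required acoustic dampers on CLT floor plates?")
```

A production system would use embeddings and an LLM to rephrase the retrieved passage, but the principle is the same: the answer is only ever drawn from, and traceable to, the practice's own curated data.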


Finally, the other use of AI is around business insights: an application already found in every other industry. Building AI-powered applications around business data, and developing predictive models to help visualise and contextualise operational data while helping to gauge the financial aspects of a business, should be a straightforward proposition for any practice, subject to the amount of good-quality data that they have.

AI is already making its mark not only on design but also during construction, operation and beyond. It can streamline the construction process by optimising schedules, predicting project costs and improving safety. But let's face it: when we think of disruption in construction, we think of robots!

Robots in construction are not a new thing, but AI-powered robots, such as those from Built Robotics, can really change the construction landscape. Additionally, anyone who has seen what Boston Dynamics' Atlas robot (which does not actually rely on ML or AI) is capable of should have no doubt that the future efficiency of robots will be an incredible asset on-site.

The capabilities that Atlas presents will allow us to fast-track production, minimise on-site risks and automate tasks in a way that augments rather than replaces people. As robots evolve, the evolution of our symbiotic relationship with them will depend on mutual interactions. It is the rules of engagement with them that we need to start working on: something that Asimov foresaw more than 80 years ago. However, the use of AI in construction and even operation (through the use of smart buildings and digital twins) is a wider discussion for another time.

Handling data

It is obvious that AI holds tremendous potential for transforming the way we design, construct and experience buildings and urban spaces. But to harness AI's power, we need to be able to control what powers it: data! Our first point of order should be understanding how to collect, organise and process our data across disciplines in a meaningful manner, so that we can leverage it.

Our datasets are growing exponentially: we produce more data than ever before during the design, construction and operation of the built environment. This is because we have also been taking advantage of the exponential growth of computational power.

By using the power of GPU computing or distributed computing (both technologies that sprang from the games and film industries), we are now capable of producing huge amounts of data, not in a matter of days or months, but in hours. This data includes thousands of solutions for projects of all scales, but also tens of thousands of analytical results that tell us how well these solutions perform.

Could we use these rich datasets to train the computer to predict optimal spatial configurations, not in a matter of hours, but within seconds? The answer is yes.

We could take advantage of the amounts of data each project yields in order to increase performance, efficiency and creativity. Many start-up companies are doing it already, by using large datasets to train their applications to provide suggestions during the design process. These AI-based generative models are going to become increasingly prominent and, once more, will yield results that are only as good as the data they have been trained on.

To conclude, a more general question should probably be this: will disruption replace creativity? This need not be the case. While these technologies can assist and enhance various aspects of the architectural process, the role of the architect remains crucial: architects bring to the table not only creativity and innovation, but also aesthetics, collaboration, communication and ethics, coupled with responsibility.

To ensure that disruptive technologies augment rather than replace our creativity, we need to set up rules of engagement, similar to Asimov's three laws of robotics. These will need to span data contextualisation, regulatory frameworks, IP, education, embedded biases in data and ethical considerations. There are a lot of challenges here and, in many ways, these challenges are even more crucial than the AI-based applications we are going to develop for the industry.

To begin with, data is key. How are we going to suitably train AI systems for the AECO industry, when it lacks appropriate schemas that could deliver consistently tagged building datasets that are contextualised, socially appropriate, structurally viable, sustainability-sensitive and even building-code compliant?

If we want to use and control these technologies to the best of our ability, we need to learn to control the data that drives them first. This is not an easy ask, but some companies are already taking up the challenge.


The EU has certainly started building regulatory frameworks around proper data governance. One such example is data pooling of smaller companies for AI uses, which will ensure ringfencing of their IP rights while allowing them to be competitive against bigger players.

However, there is still a lot of work to be done around AI alignment, AI safety and fairness and AI ethics, particularly in relation to embedded unconscious biases in the data we use to train these systems. At practice level, data fidelity is even more important to ensure the quality and consistency of the outcome, whether one is deploying in-house trained ML models or fine-tuning pre-trained ones.

Following on from that, IP is going to be another interesting challenge. Currently, artists all over the world are taking part in class action lawsuits against AI-based software providers, contesting the use of their own proprietary data to train systems that have become direct competitors to their livelihoods.

For over three months, more than 11,000 members of the Writers Guild of America have been on strike, asking for assurances that AI will not take their roles as scriptwriters. And even as I write, actors from the Screen Actors Guild - American Federation of Television and Radio Artists are on strike, contesting the use of AI to create actors' likenesses virtually and without any humans involved. To have both guilds (writers and actors) on strike simultaneously has not happened for 60 years.

These fuzzy IP boundaries when it comes to AI derive from how data is used and the lack of any kind of robust legal or regulatory frameworks around the use of these technologies in any industry. And, to top that, there are also several ethical considerations that have already surfaced through the use of AI, mainly revolving around reinforcement of bias and discrimination. There are many such documented cases, spanning from recruitment to criminal justice, and we can only be certain that our industry's data will have embedded biases too.

These are big questions and require decisive action at policy level. Hopefully, we can push these discussions sooner rather than later, as the time for pre-emptive action is already behind us.

Finally, we cannot shy away from the role that education has to play in the way these new technologies are adopted and implemented by the industry. There are already courses which focus on how AI applications could be used for architectural purposes.

This, in and of itself, is not a bad thing: we should be embracing disruptive tech and trying to understand how it will impact our industry. But the educational framework for this to happen properly is not there yet, simply by virtue of how fast this discussion is moving. The bottom line is that, if educators' objective is augmentation rather than replacement of creativity (which is in many cases the by-product of the fascination these technologies hold), we have yet to write ourselves out of the equation.

The answer to how AI is going to affect our profession is going to depend on us and how actively we try to channel this incredible technological advancement for the improvement of the AECO. We do live in exciting times, and we need to seize this moment by driving change rather than being swept away by it. To do that, we must be active participants in bringing forward the change we want to see.

We are already riding a wave of unprecedented exponential acceleration. The stakes are high, but so are the rewards.

Martha Tsigkari will be speaking at the Building the Future Commission Conference in Westminster on 27 September. You can join us to hear from some of the leading figures across the construction industry and find out more about the work of the commission.

The day will include panel debates on net zero, digital transformation and building safety as well as talks from other high-profile speakers on future trends and ideas that could transform the sector.

There will also be the chance to feed in your ideas to the commission and to network with other industry professionals keen to share knowledge.

You can follow our progress using #BuildingTheFuture on social media.

Read the original:

The stakes are high so are the rewards: Artificial intelligence and ... - Building

What will AI do to question-based inquiry? (opinion) – Inside Higher Ed


Since the release of ChatGPT in late 2022, many questions have been raised about the impact of generative artificial intelligence on higher education, particularly its potential to automate the processes of research and writing. Will ChatGPT end the college essay or prompt professors, as John Warner hopes, to revise our pedagogical ends in assigning writing? At Washington College, our Cromwell Center for Teaching and Learning organized a series of discussions this past spring motivated by questions: What is machine learning doing in education? How might we define its use in the classroom? How should we value it in our programs and address it in our policies? True to the heuristic nature of inquiry in the liberal arts and sciences, this series generated robust but unfinished conversations that elicited some initial answers and many more questions.

And yet, as we continue to raise important questions about AI while adapting to it, surprisingly few questions have been asked of AI, literally. I have come to notice that the dominant grammatical mood in which AI chatbot conversations are conducted or prompted is the imperative. As emphasized by the new conductors of "prompt engineering" (the skillful eliciting of output from an AI model, a practice that has emerged as a lucrative career opportunity), chatbots respond best to explicit commands. The best way to ask AI a question, it seems, is to stop asking it questions.

Writing in The New York Times On Tech: AI newsletter, Brian X. Chen defines "golden prompts" as the art of asking questions that will generate the most helpful answers. However, Chen's prompts are all commands (such as "act as if you are an expert in X"), no interrogatives, and not even a "please" is recommended for the new art of computational conversation. Nearly every recommendation I have seen from AI developers perpetuates this drifting of question-based inquiry into blunt command. Consider prominent AI adopter and Wharton School professor Ethan Mollick. Observing the tendency of students to get poor results from chatbot inquiry because they ask detailed questions, Mollick proposes a simple solution. Instead of guiding or instructing the chatbot with questions, Mollick writes, tell it what you want it to do and, a point made through an unnerving analogy, "boss it like you would an intern."


Why should it matter that our newest writing and research technologies are rapidly shifting the modes and moods of inquiry from interrogatives to imperatives? Surely many seeking information from an internet search no longer phrase inquiry as a question. But I would agree with Janet H. Murray that new digital environments for AI-assisted inquiry do not merely add to existing modes of research, but instead establish new expressive forms with different characteristics, new affordances and constraints. First among these for Murray, writing in Hamlet on the Holodeck (MIT Press, 1998), is the procedural or algorithmic basis of digital communication. A problem-solving procedure, an algorithm follows precise rules and processes that result in a specific answer, a predictable and executable outcome.

Algorithmic procedure might provide a beneficial substructure for fictional narrative, driving a reader (or player, in the case of a video game) toward the resolution of a complex and highly determined plot. But algorithmic rules could also pose a substantial constraint for students learning to write an essay, where more open-ended heuristics, or brief, general rules of thumb and adaptive commonplaces, are more appropriate for composition that aims for context-contingent persuasion, plausibility not certainty.

Drawing on lessons from cognitive psychology, educator Mike Rose long ago addressed the problem of writer's block in these very terms of algorithm and heuristic. Process and procedure are necessary for writing, but when writing is presented algorithmically, as a rigid set of rules to execute, developing writers can become cognitively blocked. Perhaps you remember, as I do, struggling to reconcile initial attempts at drafting an essay with a lengthy, detailed Harvard outline worked out entirely in advance. Rose's seminal advice from 1980, that educators present learning prompts more heuristically and less absolutely, remains timely and appropriate for the new algorithms of AI.

In turning questions into commands, while still referring to them as questions, we perpetuate cognitive blocking while inducing, apparently, intellectual idiocy. (Ask better questions by not asking them?) We transform key rhetorical figures of inquiry like question and conversation into dead metaphor. Consider what is happening to the word prompt. Students know the word, at least for now, as a term of art in writing pedagogy: the guidelines for an assignment in which instructors identify the purpose, context and audience for the writing, preparing the grounds for the type of question-based inquiry the students will be pursuing. In The Craft of Research (University of Chicago Press), the late Wayne Booth and his colleagues refer to these heuristic guidelines as helping students make rhetorically significant choices.

Reaching back to classical rhetoric, heuristics such as Aristotle's topics of invention or the four questions of stasis theory provide adaptive and responsive ideas and structures toward possible responses, not determined answers. When motivating questions are displaced by commands, AI-generated inquiry risks rhetorical unresponsiveness. When answers to unasked questions are removed from audience and context, the opaque information retrieved is no longer in need of a writer. The user can command not just the answer but also its arrangement, style and delivery. Since inquiry is offloaded to AI, why not the entire composition?

As educators we should worry, along with Nicholas Carr in The Glass Cage (W.W. Norton, 2014), about the cognitive de-skilling that attends the automation of intellectual inquiry. Writing before ChatGPT, Carr was already thinking about the ways that algorithmic grading programs might drift into algorithmic writing and thinking. As it becomes more efficient to pursue question-based inquiry without asking questions, we potentially lose more than the skill of posing questions. We potentially lose the means and the motivation for the inquiry. It is hard to be curious about ideas when information can be commanded.

As we continue to raise questions about AI, we need not resist all things algorithm. After all, we have been working and teaching with rule-based procedures long before the computer. But we can choose, as educators, to use emerging algorithmic tools more heuristically and with more rhetorically significant purpose. Rhetorically speaking, the best heuristics are simple concepts that can be applied to interrogate and sort through complex ideas, adapting prior knowledge to new contexts: What is X? Who values it? How might X be viewed from alternative perspectives? Such is inquiry, which, like education, can be guided but hardly commanded. If we are going to use AI tools to find and shape answers to our questions, we should generate and pose the questions.

Sean Ross Meehan, Ph.D., is a professor of English and director of writing and co-director of the Cromwell Center for Teaching and Learning at Washington College.

Original post:

What will AI do to question-based inquiry? (opinion) - Inside Higher Ed