Archive for the ‘Artificial General Intelligence’ Category

Anthropic and SK Telecom team up to build AI model for telcos – Tech Monitor

AI lab Anthropic is working with South Korea's largest telecom company, SK Telecom, on a new global telecommunications-focused large language model (LLM). The partnership also saw SKT invest $100m in the US-based developer.

Anthropic is one of a handful of AI labs focused on building foundation models and chatbots and working towards next-generation artificial general intelligence (AGI). Founded by former OpenAI executives, it released a new version of its Claude AI model last month.

Google is one of the biggest investors in Anthropic, extending its investment in May by joining a $450m funding round in the start-up. This round also included investment from Salesforce Ventures, Zoom Ventures and Spark Capital, bringing Anthropic's valuation to $4.1bn.

The new funding from SK Telecom builds on its own investment as part of that earlier Series C funding round, but the company has not confirmed the value of its total stake in Anthropic.

The agreement will see the AI lab fine-tune Claude, with support from SKT, to ensure it meets the needs of global telecoms companies. This will include adding specific skills for customer service, marketing, sales and consumer applications targeted at telecoms industry use cases.

Jared Kaplan, co-founder and chief science officer at Anthropic, will lead the project, including setting the product roadmap and the direction of customisation. The model will then be made available on a new Telco AI Platform being developed by the Global Telco AI Alliance, a partnership between networks including SKT, Deutsche Telekom, e& and Singtel announced earlier this year.

"SKT has incredible ambitions to use AI to transform the telco industry," said Dario Amodei, co-founder and CEO of Anthropic. "We're excited to combine our AI expertise with SKT's industry knowledge to build an LLM that is customised for telcos."

The Global Telco AI Alliance confirmed the new AI platform last month, which will serve as the core foundation for new AI services. This includes the creation of digital assistants, improvements to existing telecom services and new super apps offering a range of AI-powered services.

Each of the companies involved has appointed a C-suite-level representative to coordinate the overall collaboration. This will include working out ways to utilise generative AI within the sector and pursuing an open-vendor approach to technology.

Ryu Young-sang, CEO of SK Telecom, said the two companies would work to promote global AI innovation. "By combining our Korean language-based LLM with Anthropic's strong AI capabilities, we expect to create synergy and gain leadership in the AI ecosystem together with our global telco partners."

Outside of this alliance, companies like Vodafone and BT are also actively exploring ways to utilise generative AI. Vodafone is using it with IoT devices to monitor and reduce network emissions in the UK, drawing on its own in-house big data analytics platform and data from 11,500 UK radio base stations to look for consumption anomalies.

More here:

Anthropic and SK Telecom team up to build AI model for telcos - Tech Monitor

Derry City & Strabane – Explore the Future of Education and … – Derry City and Strabane District Council

15 August 2023

GenAIEdu 2023, the National Conference on Generative Artificial Intelligence in Education, will take place at the Derry~Londonderry campus of Ulster University from 11 to 13 September 2023.

The conference, hosted by the School of Computing, Engineering and Intelligent Systems, will explore the cutting-edge world of Generative Artificial Intelligence in an educational context.

Whether you are an educator, researcher, teacher, student or industry professional, this conference is your gateway to understanding how generative AI is revolutionising the way we learn, teach and assess.

Through a series of keynotes, talks, discussion panels, hands-on workshops, demonstrations and networking events with leading academics, researchers and industry experts in this area, you will learn about cutting-edge technologies and large language models such as ChatGPT, Bard and Claude, and how these tools and their natural language processing capabilities enable personalised and interactive learning experiences.

Michael Callaghan, Conference Chair and Reader in the School, said:

Generative AI offers unprecedented opportunities and challenges for transformation in education which we must navigate carefully. The GenAIEdu conference will explore the impact of Generative AI on students, and the evolving role of educators and institutions in this technologically enriched and rapidly evolving landscape.

Professor Jim Harkin, Head of the School of Computing, Engineering, and Intelligent Systems, added:

The School is delighted to host this conference and explore the use of Generative AI both in education and society in general. We are uniquely positioned on the Derry~Londonderry campus to be at the forefront of shaping the next generation of computing and AI graduates with our offering of undergraduate degree courses in Computer Science and Artificial Intelligence.

Conference registration

More details on registration, conference hotel rates, practicalities of travel and the content of the talks are available on the conference website here.

https://www.ulster.ac.uk/conference/genaiedu-2023

Register now as places are limited.

https://store.ulster.ac.uk/product-catalogue/faculty-of-computing-engineering/school-of-computing-and-intelligent-systems/genaiedu-2023-conference-registration

All registration queries should be sent to [emailprotected] with the email subject heading "GenAIEdu Registration Query".

Follow us on Twitter/X

https://twitter.com/GenAiEdu/

Confirmed speakers and presenters include

Sue Attewell - Co-lead of the National Centre for AI at JISC

https://www.linkedin.com/in/sueattewell/

Dr Cris Bloomfield - Education Architect, Microsoft

https://www.linkedin.com/in/crispinbloomfield/

Michael Callaghan - Reader, Ulster University

https://www.linkedin.com/in/michael-callaghan-48977316/

Manjinder Kainth - CEO & Co-founder of Graide

https://www.linkedin.com/in/manjinderkainth/

Peter Kilcoyne - TeacherMatic, Director at Transform Education

https://www.linkedin.com/in/peter-kilcoyne-87aa7541/

Martin Neale - Founder and CEO of ICS, the UK's first Microsoft AI Inner Circle Partner

https://www.linkedin.com/in/martinneale/

JJ Quinlan - Lecturer and Researcher in Creative Media, DkIT

https://www.linkedin.com/in/jj-quinlan-a8695a1b/

Emil Reisser-Weston - Edtech Futurist and Managing Director, Open eLMS

https://www.linkedin.com/in/emilrw/

Professor Mairéad Pratschke - Professor and Chair in Digital Education, University of Manchester

https://www.linkedin.com/in/maireadpratschke/

Dr Muskaan Singh - Lecturer at Ulster University

https://www.linkedin.com/in/muskaan-singh-73b316197/

View post:

Derry City & Strabane - Explore the Future of Education and ... - Derry City and Strabane District Council

Ethical Considerations of Using AI for Academic Purposes – Unite.AI

AI-driven services are revolutionizing numerous sectors, and academia is no exception. But as with any groundbreaking technology, there are ethical considerations to ponder. Why is this discussion vital? Because our approach to education shapes future generations.

At its core, an AI-driven essay service leverages artificial intelligence to craft, enhance, or check essays. These services can offer a range of features, including but not limited to:

Some advanced AI tools can generate entire essays based on given prompts or topics.

AI-driven services can detect and correct grammatical errors, punctuation mistakes, and awkward phrasings in an essay, often more quickly and accurately than traditional spell-checkers. Some AI tools can evaluate the style and tone of an essay, providing feedback on whether the content is formal, informal, positive, negative, or neutral. These services can also suggest improvements in terms of vocabulary, sentence structure, and coherence.

By comparing the content of an essay with vast databases of existing content, these services can identify potential instances of plagiarism.

Some AI tools can help students gather relevant information or data related to their essay topic, streamlining the research process.
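
To make the plagiarism-screening feature described above concrete, here is a minimal, illustrative sketch of the general idea: comparing an essay's word n-grams against a corpus of existing documents and flagging high overlap. The function names, the 3-word n-gram size and the 0.5 threshold are assumptions chosen for this example, not a description of how any particular commercial service works.

    # Illustrative sketch of similarity-based plagiarism screening.
    # All names and thresholds here are invented for the example.

    def word_ngrams(text, n=3):
        """Return the set of word n-grams (as tuples) in a piece of text."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(essay, source, n=3):
        """Jaccard overlap between the n-gram sets of an essay and a source."""
        a, b = word_ngrams(essay, n), word_ngrams(source, n)
        if not a or not b:
            return 0.0
        return len(a & b) / len(a | b)

    def flag_possible_plagiarism(essay, corpus, threshold=0.5):
        """Compare an essay against a dict of {doc_id: text} and return
        (doc_id, score) pairs whose overlap exceeds the threshold."""
        matches = []
        for doc_id, text in corpus.items():
            score = overlap_score(essay, text)
            if score >= threshold:
                matches.append((doc_id, score))
        return matches

Real services operate at a far larger scale, with more sophisticated fingerprinting and indexing; this sketch only illustrates the basic comparison step.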

With the rapid incorporation of AI in the educational sector, AI essay writer services have become increasingly available.

In today's fast-paced academic environment, every moment counts. Students juggle multiple assignments, extracurricular activities, and personal commitments. AI steps in as a powerful ally, streamlining tasks and cutting down the time spent on repetitive or cumbersome processes. By handling tasks like research, grammar checks, and basic content suggestions, AI tools allow students to manage their time more effectively, focusing on deeper understanding and creativity.

One of the significant advantages of integrating AI in the academic realm is the enhancement of the learning experience. By pinpointing specific weaknesses in students' work, AI-driven tools provide a clear roadmap for improvement. Students can zero in on those areas that truly require attention, ensuring that their efforts are channeled effectively.

The traditional academic feedback loop, which often involves long waits and general comments, is undergoing a revolution thanks to AI. No longer do students have to wait weeks to understand where they went wrong. With instant critiques available at their fingertips, learning becomes a dynamic and swift process. This immediacy not only boosts student engagement but also facilitates rapid iteration and understanding.

When power is amplified by AI's capabilities, the ethical dimensions surrounding its use become even more critical. Here is the crux of the matter: with great power comes great responsibility.

One of the most pressing concerns is the authenticity of the work produced by AI essay writers. If a student submits an essay primarily generated by an AI tool, can we truly say it is the student's original work? This blurring of boundaries between human effort and machine output challenges our traditional understanding of authorship and originality. It raises the question: Are we inadvertently promoting a culture where the process of thinking, analyzing, and creating is outsourced to machines?

Also, the age of AI presents a nuanced form of the age-old problem of plagiarism. Even if AI tools can generate unique content, the shadow of doubt regarding its originality persists. It is not just about lifting content from existing sources; it is about the genesis of the idea itself. And even if technically non-plagiarized, does it uphold the spirit of academic integrity?

While AI has shown remarkable proficiency in various tasks, its reliability remains a topic of debate. Machines operate based on algorithms and data, which might not always capture the nuances and complexities of human thought. Relying solely on AI's judgment could lead to misconceptions and inaccuracies.

Today, data has become the new gold, and so there are plenty of data privacy concerns. As students increasingly turn to online AI tools for academic assistance, they often share personal information, essays, and research. But at what cost? There are growing concerns about how this data is stored, who has access to it, and its potential misuse. Are students inadvertently compromising their privacy in exchange for the convenience AI-driven services offer?

In the realm of academia, the essence of learning is not just about obtaining information but about the originality of thought and the ability to innovate. There is no denying that AI possesses the capability to generate vast amounts of content, often mimicking human-like patterns of writing. However, while it can replicate, it does not necessarily innovate in the way humans do. The human mind draws from experiences, emotions, culture, and a myriad of other factors that AI simply does not possess for now. The nuance, the serendipity, and the sheer unpredictability of human creativity are challenging, if not impossible, for AI to emulate completely. Can a machine truly capture the essence of a eureka moment or the thrill of an unexpected connection?

The primary goal of education, fostered over the years by human essay writers, is not just knowledge accumulation but holistic personal and intellectual development. There is a risk of students becoming passive recipients rather than active learners. By leaning heavily on AI, they might miss out on challenges, mistakes, and subsequent learnings that are instrumental in growth. In bypassing the struggles, are we also bypassing the most significant opportunities for intellectual and personal growth?

Education's cornerstone is the development of critical thinking and analytical skills. However, an over-reliance on AI poses the risk of students outsourcing this crucial aspect of their education. When a machine is tasked with generating content, structuring arguments, or even conducting research, students may find themselves sidestepping the very processes that hone their cognitive abilities. In the long run, is it going to do more harm than good by depriving students of opportunities to think deeply and critically?

The quest for knowledge is as much about the journey as it is about the destination. But when tools like AI offer shortcuts, there is a temptation to skip the learning journey altogether. The adage "easy come, easy go" perfectly encapsulates this problem; what is achieved without effort might be lost just as quickly.

The advent of technology in classrooms has undeniably reshaped educational relationships. When it is not just a tool but an AI-driven entity intervening, the roles can change profoundly. The role of educators is undergoing a transformation. Instead of being the primary source of information, educators might find themselves transitioning to the role of mentors. Their primary function may shift from direct teaching to guiding, facilitating, and fostering an environment where students can critically engage with AI-generated content.

As to feedback, it is not just about pointing out mistakes. It is about fostering growth with a human touch. When feedback stems from AI, it might be precise and instant, but it often lacks the nuances and empathy a human educator provides. This potential absence of personal connection can impact the depth and quality of a student's personal and academic growth.

The rapid integration of AI into the educational sector might present challenges, but it is important to remember that every challenge is an opportunity in disguise. By approaching AI's incorporation with forethought and responsibility, we can ensure it becomes a boon rather than a bane.

One of the most immediate steps educational institutions can take is the establishment of clear policies and guidelines regarding AI usage. By setting boundaries on how and when AI tools should be employed, institutions can ensure that the technology is used to complement human educators rather than replace them. This can also safeguard academic integrity and ensure that the essence of learning is not compromised.

In addition, by investing in comprehensive training programs for both educators and students, institutions can reduce the potential for misuse and misunderstanding. Educators can be trained on how to best integrate AI tools into their teaching methodologies, and students can be educated on the ethical considerations and best practices for using AI in their learning processes. Through proper education, we can strike the right balance, harnessing the immense potential of AI while preserving the invaluable human touch in the realm of education.

The intersection of AI and academia is fraught with both promise and pitfalls. While the allure of AI-driven essay writing is undeniable, it is vital to navigate this terrain with a moral compass. The future of education hinges not just on technology but how we choose to wield it.

See original here:

Ethical Considerations of Using AI for Academic Purposes - Unite.AI

Elon Musk says Tesla cars now have a mind, figured out ‘some aspects of AGI’ – Electrek

Elon Musk claims that Tesla may have figured out some aspects of AGI as he believes that Tesla vehicles now have a mind.

The CEO has said several times that he believes most of Tesla's value is attached to self-driving, and he says Tesla could achieve it by the end of the year.

The Tesla community is divided between believers who think the automaker is indeed about to deliver on its long-stated promise, and people who have been burned too many times by missed timelines and think a robotaxi service from Tesla is still years away.

That's why we are tracking the effort closely, to see if there's any chance Tesla can make Musk's prediction come true with just a few months left in the year.

On X (formerly Twitter), Musk often shines a spotlight on some of those true believers who only show the good performance of Tesla's FSD Beta. This week, he commented on one such post, claiming that he believes Tesla has figured out some aspects of AGI:

I think we may have figured out some aspects of AGI. The car has a mind. Not an enormous mind, but a mind nonetheless.

AGI stands for artificial general intelligence. Musk has said that he believes Tesla might play a role in achieving AGI through its self-driving program.

Unlike some other self-driving programs, Tesla relies heavily on camera-based vision and neural nets to power its system. The company believes that this approach is closer to how humans drive and could be transferred to other autonomous products, like its Optimus robot.

The guy may not be wrong. In my last review of FSD Beta, I noted that it drives like a first-time 14-year-old driver who sometimes does hard drugs.

I also noted that while this might sound like an insult to Tesla's system, I wouldn't know the first thing about making a car drive autonomously at the level of a 14-year-old driver who sometimes does hard drugs. Therefore, I believe it's an achievement in itself.

Now does it mean that Tesla cars have a mind equivalent to a 14-year-old who sometimes does hard drugs while driving? Probably not, but I can see his point.

If you have been following my reporting on FSD, you know that I'm not the most optimistic about the program. However, I have some hope that updating the vehicle controls with new neural nets, and the new computing power that comes with the Dojo supercomputer, could greatly accelerate the pace of improvements.

AGI, though? I'm skeptical but open-minded.


The rest is here:

Elon Musk says Tesla cars now have a mind, figured out 'some aspects of AGI' - Electrek

To Navigate the Age of AI, the World Needs a New Turing Test – WIRED

There was a time in the not-too-distant past (say, nine months ago) when the Turing test seemed like a pretty stringent detector of machine intelligence. Chances are you're familiar with how it works: human judges hold text conversations with two hidden interlocutors, one human and one computer, and try to determine which is which. If the computer manages to fool at least 30 percent of the judges, it passes the test and is pronounced capable of thought.
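
As a toy illustration of the pass criterion just described, the sketch below tallies how many judges mistook the machine for the human and applies the 30 percent threshold. The function name and data layout are invented for this example and are not part of Turing's original formulation.

    # Illustrative sketch (not from the article): the machine "passes" if it
    # fools at least 30 percent of the judges. Names here are invented.

    def machine_passes(judges_fooled, threshold=0.30):
        """judges_fooled: list of booleans, True where a judge mistook the
        machine for the human. Returns True if the fool rate meets the bar."""
        if not judges_fooled:
            return False
        fool_rate = sum(judges_fooled) / len(judges_fooled)
        return fool_rate >= threshold

    # Example: 4 of 10 judges were fooled, so 0.4 >= 0.3 and the test is passed.
    print(machine_passes([True, False, True, False, False,
                          True, False, False, True, False]))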

For 70 years, it was hard to imagine how a computer could pass the test without possessing what AI researchers now call artificial general intelligence, the entire range of human intellectual capacities. Then along came large language models such as GPT and Bard, and the Turing test suddenly began to seem strangely outmoded. "OK, sure," a casual user today might admit with a shrug, "GPT-4 might very well pass a Turing test if you asked it to impersonate a human." But so what? LLMs lack long-term memory, the capacity to form relationships, and a litany of other human capabilities. They clearly have some way to go before we're ready to start befriending them, hiring them, and electing them to public office.

And yeah, maybe the test does feel a little empty now. But it was never merely a pass/fail benchmark. Its creator, Alan Turing, a gay man sentenced in his time to chemical castration, based his test on an ethos of radical inclusivity: the gap between genuine intelligence and a fully convincing imitation of intelligence is only as wide as our own prejudice. When a computer provokes real human responses in us, engaging our intellect, our amazement, our gratitude, our empathy, even our fear, that is more than empty mimicry.

So maybe we need a new test: the Actual Alan Turing Test. Bring the historical Alan Turing, father of modern computing (a tall, fit, somewhat awkward man with straight dark hair, loved by colleagues for his childlike curiosity and playful humor, personally responsible for saving an estimated 14 million lives in World War II by cracking the Nazi Enigma code, subsequently persecuted so severely by England for his homosexuality that it may have led to his suicide) into a comfortable laboratory room with an open MacBook sitting on the desk. Explain that what he sees before him is merely an enormously glorified incarnation of what is now widely known by computer scientists as a Turing machine. Give him a second or two to really take that in, maybe offering a word of thanks for completely transforming our world. Then hand him a stack of research papers on artificial neural networks and LLMs, give him access to GPT's source code, open up a ChatGPT prompt window (or, better yet, a Bing-before-all-the-sanitizing window) and set him loose.

Imagine Alan Turing initiating a light conversation about long-distance running, World War II historiography, and the theory of computation. Imagine him seeing the realization of all his wildest, most ridiculed speculations scrolling with uncanny speed down the screen. Imagine him asking GPT to solve elementary calculus problems, to infer what human beings might be thinking in various real-world scenarios, to explore complex moral dilemmas, to offer marital counseling and legal advice and an argument for the possibility of machine consciousness, skills which, you inform Turing, have all emerged spontaneously in GPT without any explicit direction by its creators. Imagine him experiencing that little cognitive-emotional lurch that so many of us have now felt: Hello, other mind.

A thinker as deep as Turing would not be blind to GPT's limitations. As a victim of profound homophobia, he would probably be alert to the dangers of implicit bias encoded in GPT's training data. It would be apparent to him that despite GPT's astonishing breadth of knowledge, its creativity and critical reasoning skills are on par with a diligent undergraduate's at best. And he would certainly recognize that this undergraduate suffers from severe anterograde amnesia, unable to form new relationships or memories beyond its intensive education. But still: imagine the scale of Turing's wonder. The computational entity on the laptop in front of him is, in a very real sense, his intellectual child, and ours. Appreciating intelligence in our children as they grow and develop is always, in the end, an act of wonder, and of love. The Actual Alan Turing Test is not a test of AI at all. It is a test of us humans. Are we passing, or failing?

Read more:

To Navigate the Age of AI, the World Needs a New Turing Test - WIRED