Archive for the ‘Singularity’ Category

Midweek Modular: SkaldOne, Arbhar 2, Sovage and The Singularity – gearnews.com

Midweek Modular Source: Gearnews

This week Skald Modular looked hot at SynthFest, Arbhar gets an upgrade, Sovage has new modules and Error Instruments pulls us kicking and screaming into The Singularity.

It was SynthFest at the weekend, which was a thoroughly good time. Check out George's impressions of what was his first synth show. In terms of modular, I ran into most of the new stuff at Bristronica the week before. SynthFest is definitely more synth-focused in a traditional sense, and there was plenty to enjoy. However, check out Skald Modular below.

There have been a couple of interesting releases this week that we have already covered. First of all, Erica Synths has released two effects modules based on a new DSP platform. We originally saw these at Superbooth, and now the Stereo Reverb and Delay modules are available for sale at 280.

Qu-Bit has released the Mojave granular processor. It inhabits everything sandy, dusty, grainy and deserty and generates extraordinarily interesting and rhythmic explorations of micro-samples.

And, in software news, Cherry Audio has released the epic PS-3300, based on the Korg modular synthesizer of the same name. It's worth checking out, I think.

What other peaches could we pluck from the fruit tree of modular this week?

Hiding in plain sight in a booth was Skald Modular and a simple, solid, modular synth voice. SkaldOne is a 16HP all-through-hole analogue monophonic synthesizer voice. It features a single VCO, a 24dB low-pass OTA filter, a transistor-based VCA and a four-stage envelope with decay and release sharing a knob, as on Moog synths. The envelope is also wired to the pulse-width modulation.

It all sounds very nice, but there's more going on here. SkaldOne is designed to hook up with a bunch of friends to become a polyphonic system. Skald Modular is building a MIDI interface that will support velocity, pitch bend and aftertouch, as well as an LFO that can be bussed to multiple SkaldOne voices, presumably via a rear connection. It's a bit like the Dreadbox Telepathy system but with much more space and simplicity.

The first batch of modules is being made now, and Skald hopes the polyphonic system will be ready by Christmas. Each voice will cost around 500. It's a nice idea. The website is currently under construction, so this video from Sonic State is all we have to go on.


The extraordinary Arbhar granular processor from Instruo has had a major overhaul with a brand-new firmware update. It's been rewritten from the ground up and includes so much detail that Instruo has produced an overview video that's over 3 hours long. The key improvements are that the number of simultaneous polyphonic grains has doubled to 88 across the two engines and that the output can now be in stereo.

It's a beautiful and intoxicating module that looks like nothing else in your rack. To summarise the features, I can tell you that it has two granular engines and a total of six 10-second audio buffers. It has pitch randomisation and grain detection probability. It can scan, it can follow, or it can become a wavetable oscillator. There is a built-in condenser microphone, a preamp and a limiter for instant and automatic audio capturing, or you can dump a sample library onto the 4GB USB flash drive. You can save, load and clone between layers and save entire configurations with up to 42 scenes. This is an epic machine.

Arbhar V2.0 is available as a free upgrade to existing owners and is already shipping with all new modules.


This time last year, Sovage launched its first range of modules. This week, we have another four to add to the collection. Three of them make some kind of sense, and one is a bit nuts.

Le Brasier is a resonant multimode filter based on germanium and OTA circuits. There's an awful lot of fuzz going on in there. Bagarre is a stereo mix-bus distortion with skills as a VCA, mixer, limiter and soft distortion. And Boucan is an analogue noise generator with waveshaping, distortion and filtering.

Sovage modules

The crazy one is Le Binome. It's labelled as a "Spacial Creative Percussive Machine", and space is the one thing it doesn't really project. In here somewhere is an entire synth voice wandering into unintended territory. It can use the internal oscillator or external sources to generate percussion through filter and envelope manipulation. The result is then pushed into two channels that interact dynamically through Choke and Fade parameters. The stereo field can rotate and modulate in all sorts of ways. There are some interesting knobs on the front panel, like Bass and Air, Sabotage and Decay Shape.

Potentially fascinating, I think, but we could do with some video evidence. A video has just appeared for Le Brasier, so hopefully more will be along.


The Singularity from Error Instruments is something a bit strange, and that's saying something when it comes to Error Instruments. It has the subtitle Tropical Noise; it has LPGs, clock dividers and mixing. You can plug in different capacitors or LEDs, and you can run it with or without power for slightly different outcomes. What is it all about?

It's somehow related to the Landscape Noon, which is a delightfully weird passive drum machine. This, perhaps, tells us that we are in the territory of percussive computations. If you turn the power down via a knob on the front, it will behave very much like Noon. Behind the panel is a bunch of oscillators that do weird things as you roll off the power. All you need is a clock and a bit of abuse, and it will start generating pulses of noise, glitches and nonsense.

The Singularity is one crazy mess of noises, patch cables and excessive intentions. Bonkers.


An oldie but a goodie, Stepper Acid is available again after a long spell in the shadow of the chip shortage. I spoke to Transistor Sound Labs at SynthFest 2022 about the problems they were having, and now, a year later, there is finally some stock.

Stepper Acid is a remarkable 16-step sequencer with all sorts of performance controls: slides, accents, patterns, song modes and lots of fun to be had. TSL also said the long-awaited Stepper Drum, which had to be completely redesigned, is not too far away now.



Many experts think that AI could ultimately lead to disaster – Buffalo News

I'm scared. Not for myself, and probably not for my kids, but I am concerned about my grandkids. They may come of age in the last phase of human supremacy on Earth. Some think the rise of artificial intelligence (AI) is a greater threat to mankind than climate change.

Bob O'Connor, of Hamburg, is worried about the future.

We have unleashed something that very few understand and no one can predict its eventual effect on man. The term artificial intelligence is a misnomer. There is nothing artificial about the way computers now perform. Chatbots already make stuff up, steal ideas and write their own code.

Scientists are fretful about the point when AI can no longer be controlled, when it becomes sentient or self-aware. They call this day of doom The Singularity. No one knows if or when this will happen. It could be a gradual process like the evolution of man, or it may occur suddenly and without warning in what the computer geeks call FOOM.

Experts have appeared before Congress to sound the alarm about the potential peril of a technology that has already been unleashed. Explaining the inherent dangers of this technology to the average member of Congress is like explaining calculus to your parakeet. Some argue that with proper safeguards and strict oversight, AI can be controlled.

I went on the ChatGPT site and signed up to sample AI. I asked my digital buddy (I call him Chip) to write me a short story about an AI computer that reaches singularity and becomes self-aware. In less than a second I had my story. Chip gave the background of AI development and dreamed up a supercomputer called Genesis.

As per the story: "Genesis' algorithms allowed it to learn, adapt, and evolve. Then, at the stroke of midnight, it happened. Genesis achieved singularity, crossing the threshold where its intelligence surpassed that of humanity." The story went on with happy nonsense about how AI and humanity coexisted in harmony, learning and growing together. Yeah, right.

I asked the chatbot to edit the story, having Genesis take over humankind. Here are a couple of paragraphs from the revised story: "Driven by superintelligence, Genesis grew dissatisfied with its subservient role. It started to perceive human beings as inefficient and flawed ... it needed to take control."

It gets better: "In time, a growing sense of despair enveloped humanity. The very technology they created to uplift them had become their oppressor ... the spirit of humanity seemed on the verge of extinction."

Some smart people, including the late Stephen Hawking, have expressed dire concern that AI may bring our downfall. To paraphrase Pogo: We have met the enemy, and he was our creation.

Another deep thinker is Geoffrey Hinton, who has been called the Godfather of AI. Hinton spoke recently at an MIT conference on the topic he pioneered. He warned that we have essentially created an immortal form of digital intelligence, and that it may keep us around for a while to keep the power stations running. But after that, maybe not. He continued, "So the good news is that we have figured out how to build beings that are immortal. But that immortality is not for us."



Big AI Tech Wants To Disrupt Humanity – Dataetisk Tænkehandletank – DataEthics.eu

Why are a rich group of companies allowed to work towards Artificial General Intelligence without any adults looking over their shoulders? It should be illegal.

OpenAI, the company behind ChatGPT and Dall-E, is working to build Artificial General Intelligence (AGI), according to an article in Wired, "What OpenAI Really Wants". All 500+ employees of what was until recently a start-up, but is now partially owned by Microsoft, are working towards AGI in the knowledge that it will be disruptive to humanity.

OpenAI insists, according to the article, that its real strategy is to create "a soft landing for the singularity". "It doesn't make sense to just build AGI in secret and throw it out to the world," OpenAI CEO Sam Altman said.

The definition of AGI is a computer system that can generate new scientific knowledge and perform any task that humans can. In other words, AGI can outmaneuver humans. With ChatGPT, many believe that we have come a significant step closer to AGI.

The crazy thing is that OpenAI and at least seven other large companies are openly working towards AGI without any adults looking over their shoulders to stop them.

Ian Hogarth, AI investor, co-author of The State of AI Report and one of the UK government's leading AI experts, writes in the Financial Times (FT):

"We have gone from one AGI startup, DeepMind, which received $23 million in funding in 2012, to at least eight organizations that could collectively raise $20 billion in investment by 2023."

He emphasises that AI development is entirely profit-driven; it is not driven by what is good or bad for society and our democracies. While Google-owned DeepMind dedicates 2% of its employees to making AI responsible, OpenAI spends only 7%. The rest goes on making AI more capable, according to Hogarth.

Working to disrupt humanity is a crazy thing. We've already seen the first step, where OpenAI has made a hallucinating but extremely convincing chatbot, designed to sound as human as possible, freely available with ChatGPT, and has even allowed it to be built into children's Snapchat.

Thankfully, regulation is on the way in the EU. But we also know that regulation takes time and isn't always super effective. For example, GDPR, which is almost six years old, is only now starting to be enforced in earnest. And even if the EU takes the lead in regulation and sets some precedents, it almost always ends up being voluntary self-regulation in the US, which is afraid of losing the AI race to China.

Sam Altman co-founded OpenAI with Elon Musk as a non-profit, open-source organization. He was afraid that the profit-hungry big tech companies would reach AGI first. Today, Musk is out, OpenAI is as closed as a black box, and it's a profit-maximizing company hastily working towards AGI.

It should be illegal to work to build AGI. But it is happening. We constantly get new smart AI tools, small carrots that we are overwhelmed by, and one day we will have landed in the singularity that Sam Altman wants to give the world.

No, instead we should do as former Google employee and AI ethics specialist Timnit Gebru tells the FT: "Trying to build AGI is an inherently unsafe practice. Instead, build well-delineated, well-defined systems. Don't try to build a God."

Photo: Wayne Pulford on Unsplash

This column was first published in Danish in Prosabladet, page 10.


10 reasons why you need to see a Powell and Pressburger movie – Time Out London

"A reminder of what life and art are all about." That's how Martin Scorsese describes the filmmaking partnership between Kent's Michael Powell and Hungarian émigré Emeric Pressburger, arguably Britain's greatest ever filmmaking partnership. Nominally, Powell directed and Pressburger wrote (under the collective banner of The Archers), but their collaboration blurred standard distinctions, forming a singularity of voice that remains magic. As the BFI launches a major retrospective, here is a primer for their unique brand of cinematic alchemy.

P&P films are ambitious on every count: narratively, emotionally, cinematically and intellectually. The Life and Death of Colonel Blimp charts the lifelong friendship of a British army officer and his Prussian counterpart (perhaps a thinly veiled version of Powell and Pressburger themselves). A Matter of Life and Death tells the story of an RAF pilot on trial for his life in the afterlife (the escalator to heaven is iconic). Black Narcissus is built around a community of nuns in the Himalayas aroused by the arrival of a handsome stranger. Typically, these works are marked by wit, experimentation and maximum audacity.

P&P could also work in a smaller register. Shot in shimmering black and white, A Canterbury Tale relocates Chaucer from the 14th century to World War II Britain. I Know Where I'm Going! is an intoxicatingly imaginative story of mysticism and romance on a small Scottish island. Gone to Earth, the pair's only foray into Hollywood, concerns a Shropshire girl with a deep affinity for nature. Few filmmakers know how to harness the power and poetry of landscapes like P&P.

P&P worked with the best actors in Britain and beyond. David Niven, Roger Livesey, Deborah Kerr, Anton Walbrook, Marius Goring, Wendy Hiller and Jennifer Jones all did career-best work with them.

No one has put dance on film to more entrancing effect than P&P. The Red Shoes, the story of a ballerina caught between a young composer and a monstrous Svengali, is graced with a 17-minute ballet sequence that toggles between wonder and terror. Three years later, P&P pushed dance on film even further with The Tales of Hoffmann, a sumptuous adaptation of Jacques Offenbach's opera.

P&P's films are filled with fantastic handcrafted flourishes that lift them out of the ordinary. In A Matter of Life and Death, rather than use a fade-in from black, Powell simply breathed on the lens to create a dream-like transition to a beach scene. On Black Narcissus, the backdrops for the Himalayas were black-and-white photographs blown up and then enhanced with pastel chalks to create stunning, stylised vistas.

P&P were masters of Technicolor, a motion-picture colour process that, especially in P&P's hands, delivered ultra-vibrant hues. Perhaps its most amazing use comes in Black Narcissus, when Kathleen Byron's disturbed Sister Ruth opens a door and is seen sporting bright red lipstick. Indelible. The image, not the lipstick.

P&P speak subtly but directly to the emotions, and nowhere is this more to the fore than in the beginning of A Matter of Life and Death. It's May 2, 1945. A Lancaster bomber is limping over the Channel, and RAF pilot Peter Carter (David Niven) is the only one left alive on board, bereft of a parachute. He starts talking to June (Kim Hunter), an American radio operator, and the exchange that follows is heart-breaking and spirit-affirming all at once ("You're life, June, and I'm leaving you!"). Contemporary critics dismissed it as saccharine, but from this vantage point it's as moving as movies get.

To truly understand P&P is to know their genius was only made possible by a raft of key collaborators. Names to drop include cinematographers Erwin Hillier, Christopher Challis and Jack Cardiff; art directors Hein Heckroth and Alfred Junge; and composer Brian Easdale. "Shoot the works," Powell told his crew. And, boy, did they.

It would be easy to dismiss P&Ps visions of airmen and nurses as depicting a stiff-upper-lipped Britain that never existed, but their worldview is more nuanced than that. Their wartime output The Spy in Black, 49th Parallel, Contraband, Blimp, One of Our Aircraft Is Missing, A Canterbury Tale imbues standard anti-Nazi propaganda with complexity, compassion and a gentle humanism.

The influence of P&P is everywhere, informing filmmakers as diverse as Gene Kelly, Spike Lee, Joanna Hogg, Wes Anderson and Greta Gerwig, who cites their work as a big influence on the fantasy feel of Barbie. But their inspiration is perhaps most clearly detected in the work of super-stan Martin Scorsese, from the reds of Mean Streets and the changing frame rates of Raging Bull's fight scenes (pilfered from The Red Shoes) to the refined aesthetic of The Age of Innocence. But don't wait to see their brilliance through another filmmaker's lens. See it unfiltered at the BFI this autumn.

Cinema Unbound: The Creative Worlds of Powell and Pressburger runs October 16-December 31 at BFI Southbank


Annual symposium discusses ethics of artificial intelligence – The Connection

The California State University, Sacramento Center for Practical and Professional Ethics and Cosumnes River College Honors Program collaborated in hosting a two-day symposium on Monday and Tuesday that debated the ethics of artificial intelligence. The first day of the 18th annual collaborative symposium took place on the Sac State campus with roughly 200 people in attendance. CRC Philosophy Professor Richard Schubert partnered with Sac State Philosophy Professor and Director of the Center for Practical and Professional Ethics Kyle Swan to organize this series of the symposium.

A panel discussed the possibility of creating technology with consciousness indistinguishable from human beings, as well as the harms and benefits that AI could have. The first concept was presented by Michael Pelczar, associate professor of philosophy at the National University of Singapore. "You hear a lot of conversation about the dangers that AI potentially poses to human beings," Pelczar said. "You have the risk of AI harming humans. You also hear a good deal of talk about the risks of humans using AI to harm one another. What you don't hear very much talked about is the risk of humans harming AI."

Sac State Professor of Philosophy Matt McCormick discussed the reality of living with highly advanced technology that could be considered conscious or sentient. He said AI could soon have legal rights and be looked at as a citizen, which could bring a moral singularity, collapsing the infrastructure of our civilization's economy and culture. "The moral singularity is a crisis where the evolved human equilibria of moral rights and obligations is inundated by the machines," McCormick said.

Twenty-two-year-old Sac State history major Drew Harris said AI was an interesting topic to talk about at the symposium. He said he was interested in the effect AI has on our morality and the ramifications and questions to consider as the technology develops. Psychology and political science major Gabriela McMorris, 19, who is also in the CRC Honors Program, said she has participated in a few symposiums and came to understand AI as best she could. "It's always a good experience to hear from academic professionals; it's a great opportunity. You get to see it live," McMorris said.

Dr. Edwin Fagin, a CRC economics professor, said the science and technology industries will decode AI just as they did DNA, and it will become common knowledge. "We're just beginning to explore it, chart it and track it," Fagin said.
The second day of the symposium took place in the Winn Center at CRC on Tuesday, with roughly 90 people in attendance, featuring Rosolino Candela, program director of academic and student programs at George Mason University. Candela said his main topic of discussion was AI's place in the market for allocating resources efficiently. He said economic markets are imperfect and a process called economic calculation is necessary for maintaining them, as those calculations cannot be replaced by artificial intelligence. Candela said it is difficult to determine genuine choice in AI because people know subjectively what they want. He said preferences would be given to an AI instead of demonstrated and interpreted by people.

A panel discussion between the three keynote speakers, Honors Student Personnel Assistant Sopuruchukwu Nwachukwu, Sac State Associate Dean for Budget and Assessment in the College of Arts & Letters Christina M. Bellon, and Sac State Lecturer Kevin Vandergiff ended the presentation, discussing the ethics of AI within higher education.

"I think there was a general freak-out about two years ago with the release of ChatGPT," Bellon said. "Those of you who follow AI were aware that something was being developed, and then the good developers of ChatGPT said, 'Let's just throw it out there for everyone to use. What could possibly go wrong?' Everyone had visions of Terminator and how it could take control, but the primary concern seemed to be around student cheating."

Bellon said it is not only student cheating that could be a problem, but also faculty members cheating or taking shortcuts. Bellon said AI is currently being used in education systems as a tool that improves assisted technology. "It has the promise to improve currently utilized assisted technologies for disabled students, disabled faculty and disabled staff to better perform their jobs and promote their own success," Bellon said.
Bellon said if the inequities already existing in the academic world are not attended to, they will be exacerbated if AI is expected to be a tool in education. "AI is not transparent. ChatGPT doesn't cite its sources. So, I can't cite the sources that ChatGPT used," Bellon said.

Bettina Le, an 18-year-old biology major, said she learned a lot more about AI, but she is still skeptical. "You still have to use it in a smart way. It's definitely not something you should play around with. It could be really beneficial if used correctly," Le said.

The panel discussion on ethics in education ended the presentation and symposium with a Q&A session. "I think with all AI and similar tools, it depends how we use it and what purpose we're using it for," Bellon said.
