Archive for the ‘Singularity’ Category

Short Interest in Singularity Future Technology Ltd. (NASDAQ:SGLY) Drops By 14.8% – Defense World

Singularity Future Technology Ltd. (NASDAQ:SGLY) was the recipient of a large decrease in short interest in the month of July. As of July 15th, there was short interest totaling 40,800 shares, a decrease of 14.8% from the June 30th total of 47,900 shares. Currently, 1.2% of the company's shares are sold short. Based on an average trading volume of 26,700 shares, the short-interest ratio is currently 1.5 days.
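
For readers who want to check the math, the short-interest ratio ("days to cover") is simply short interest divided by average daily trading volume. A minimal sketch using the figures reported above:

```python
# Days to cover: how many days of average trading volume it would
# take short sellers to buy back every share they have sold short.
short_interest = 40_800          # shares short as of July 15th
prior_short_interest = 47_900    # shares short as of June 30th
avg_daily_volume = 26_700        # average shares traded per day

print(f"{short_interest / avg_daily_volume:.1f} days to cover")  # ~1.5
change = (short_interest - prior_short_interest) / prior_short_interest
print(f"{change:.1%} change in short interest")                   # ~-14.8%
```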

Shares of SGLY opened at $4.01 on Friday. The business's 50-day moving average is $4.75 and its 200-day moving average is $4.56. The company has a market capitalization of $14.04 million, a P/E ratio of -1.06, and a beta of 1.01. Singularity Future Technology has a fifty-two-week low of $2.00 and a fifty-two-week high of $8.00.
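
The moving averages quoted here are simple trailing means of daily closing prices. A minimal sketch (the closing prices below are made up for illustration, not actual SGLY data):

```python
def moving_average(closes: list[float], window: int) -> float:
    # Simple moving average: the mean of the last `window` closes.
    if len(closes) < window:
        raise ValueError("not enough closing prices for this window")
    return sum(closes[-window:]) / window

# Hypothetical closing prices, for illustration only.
closes = [4.80, 4.70, 4.90, 4.60, 4.50]
print(moving_average(closes, window=5))  # 4.7
```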

Singularity Future Technology (NASDAQ:SGLY) last announced its quarterly earnings results on Wednesday, May 15th. The company reported ($0.32) earnings per share for the quarter, on revenue of $0.45 million. Singularity Future Technology had a negative return on equity of 97.21% and a negative net margin of 255.35%.

Singularity Future Technology Ltd. operates as an integrated logistics solutions provider in China and the United States. It offers freight logistics services, including shipping, transportation, warehousing, collection, last-mile delivery, drop shipping, customs clearance, and overseas transit delivery.

Receive News & Ratings for Singularity Future Technology Daily - Enter your email address below to receive a concise daily summary of the latest news and analysts' ratings for Singularity Future Technology and related companies with MarketBeat.com's FREE daily email newsletter.

Originally posted here:

Short Interest in Singularity Future Technology Ltd. (NASDAQ:SGLY) Drops By 14.8% - Defense World

This Week's Awesome Tech Stories From Around the Web (Through July 27) – Singularity Hub

Google DeepMind's New AI Systems Can Now Solve Complex Math Problems Rhiannon Williams | MIT Technology Review AI models can easily generate essays and other types of text. However, they're nowhere near as good at solving math problems, which tend to involve logical reasoning, something that's beyond the capabilities of most current AI systems. But that may finally be changing. Google DeepMind says it has trained two specialized AI systems to solve complex math problems involving advanced reasoning.

This Startup Is Building the Country's Most Powerful Quantum Computer on Chicago's South Side Adam Bluestein | Fast Company PsiQuantum's approach is radically different from that of its competitors. It's relying on cutting-edge silicon photonics to manipulate single particles of light for computation. And instead of taking an incremental approach to building a supercomputer, it's focused entirely on coming out of the gate with a full-blown, fault-tolerant system that will be far larger than any quantum computer built to date. The company has vowed to have its first system operational by late 2027, years earlier than other projections.

The Race for the Next Ozempic Emily Mullin | Wired These drugs are now wildly popular, in shortage as a result, and hugely profitable for the companies making them. Their success has sparked a frenzy among pharmaceutical companies looking for the next blockbuster weight-loss drug. Researchers are now racing to develop new anti-obesity medications that are more effective, more convenient, or produce fewer side effects than the ones currently on the market.

Watch a Robot Peel a Squash With Human-Like Dexterity Alex Wilkins | New Scientist Pulkit Agrawal at the Massachusetts Institute of Technology and his colleagues have developed a robotic system that can rotate different types of fruit and vegetables with the fingers of one hand while its other arm peels.

Here's What Happens When You Give People Free Money Paresh Dave | Wired The initial results from what OpenResearch, an Altman-funded research lab, describes as the most comprehensive study on unconditional cash show that while the grants had their benefits and weren't spent on items such as drugs and alcohol, they were hardly a panacea for treating some of the biggest concerns about income inequality and the prospect of AI and other automation technologies taking jobs.

Meta Releases the Biggest and Best Open-Source AI Model Yet Alex Heath | The Verge Meta is releasing Llama 3.1, the largest-ever open-source AI model, which the company claims outperforms GPT-4o and Anthropic's Claude 3.5 Sonnet on several benchmarks. CEO Mark Zuckerberg now predicts that Meta AI will be the most widely used assistant by the end of this year, surpassing ChatGPT.

US Solar Production Soars by 25 Percent in Just One Year John Timmer | Ars Technica In terms of utility-scale production, the first five months of 2024 saw it rise by 29 percent compared to the same period in the year prior. Small-scale solar was only up by 18 percent, with the combined number rising by 25.3 percent. It's worth noting that this data all comes from before some of the most productive months of the year for solar power; overall, the EIA is predicting that solar production could rise by as much as 42 percent in 2024.

SearchGPT Is OpenAI's Direct Assault on Google Reece Rogers and Will Knight | Wired After months of speculation about its search ambitions, OpenAI has revealed SearchGPT, a prototype search engine that could eventually help the company tear off a slice of Google's lucrative business. OpenAI said that the new tool would help users find what they are looking for more quickly and easily by using generative AI to gather links and answer user queries in a conversational tone.

Wafer-Thin Light Sail Could Help Us Reach Another Star Sooner Alex Wilkins | New Scientist A light sail designed using artificial intelligence is about 1,000 times thinner than a human hair and weighs as much as a grain of sand, and it could help us create a spacecraft capable of reaching another star sooner than we thought.

AI Can't Make Music Matteo Wong | The Atlantic While AI models are starting to replicate musical patterns, it is the breaking of rules that tends to produce era-defining songs. "Algorithms are great at fulfilling expectations but not good at subverting them, but that's what often makes the best music," Eric Drott, a music-theory professor at the University of Texas at Austin, told me.

Image Credit: David Clode / Unsplash

Here is the original post:

This Weeks Awesome Tech Stories From Around the Web (Through July 27) - Singularity Hub

What Is the Singularity? And Should You Be Worried? – Electronics | HowStuffWorks

Vernor Vinge proposes an interesting and potentially terrifying prediction in his essay titled "The Coming Technological Singularity: How to Survive in the Post-Human Era." He asserts that mankind will develop a superhuman intelligence before 2030.

The essay specifies four ways in which this could happen:

1. Computers might be developed that are "awake" and superhumanly intelligent.
2. Large computer networks (and their associated users) might "wake up" as a superhumanly intelligent entity.
3. Computer/human interfaces might become so intimate that users could reasonably be considered superhumanly intelligent.
4. Biological science might provide the means to improve natural human intellect.

Out of those four possibilities, the first three could lead to machines taking over. While Vinge addresses all the possibilities in his essay, he spends the most time discussing the first one.

Computer technology advances at a faster rate than many other technologies. Computers tend to double in power every two years or so. This trend is related to Moore's Law, the observation that the number of transistors on an integrated circuit doubles roughly every 18 to 24 months.
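
As a back-of-the-envelope illustration of why that compounding matters, doubling every two years multiplies capability by a factor in the hundreds of thousands over a few decades:

```python
# Growth after `years` of doubling once every `period` years.
def growth_factor(years: float, period: float = 2.0) -> float:
    return 2 ** (years / period)

# From Vinge's 1993 essay to his 2030 deadline:
print(f"{growth_factor(2030 - 1993):,.0f}x")  # roughly 370,000x the 1993 baseline
```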

Vinge says that at this rate, it's only a matter of time before humans build a machine that can "think" like a human.

But hardware is only part of the equation. Before artificial intelligence becomes a reality, someone will have to develop software that will allow a machine to analyze data, make decisions and act autonomously.

If that happens, we can expect to see machines begin to design and build even better machines. These new machines could build faster, more powerful models.

Yoshikazu Tsuno/AFP/Getty Images

Robots like this might look cute, but could they be plotting your downfall?

Technological advances would move at a blistering pace. Machines would know how to improve themselves. Humans would become obsolete in the computer world. We would have created a superhuman intelligence.

Advances would come faster than we could recognize them. In short, we would reach the singularity.

So what would the world look like after the singularity? Vinge says it's impossible to say. The world would become such a different landscape that we can only make the wildest of guesses. Vinge admits that while it's probably not fruitful to suggest possible scenarios, it's still a lot of fun. Maybe we'll live in a world where each person's consciousness merges with a computer network.

Or perhaps machines will accomplish all our tasks for us and let us live in luxury. But what if the machines see humans as redundant or worse? When machines reach the point where they can repair themselves and even create better versions of themselves, could they come to the conclusion that humans are not only unnecessary, but also unwanted?

It certainly seems like a scary scenario. But is Vinge's vision of the future a certainty? Is there any way we can avoid it?

See the original post:

What Is the Singularity? And Should You Be Worried? - Electronics | HowStuffWorks

The Singularity by 2045, Plus 6 Other Ray Kurzweil Predictions – Electronics | HowStuffWorks

Here's some fun news for your day: By 2045, human beings will become second banana to machines that have surpassed the intelligence of mankind. So, in less than 30 years, artificial intelligence will become smarter than human intelligence, and robots will rule us all. (Or something like that.) And we know it's true, because Ray Kurzweil says so.

Kurzweil isn't some cult leader. He's a director of engineering at Google. But he is in the business of predictions, as a futurist. And while his most recent declaration is that the singularity (artificial intelligence surpassing human intelligence) will happen by 2045, it's only the latest in a long line of predictions, for which he claimed an 86 percent accuracy rate as of 2010.

Now seems like a good time to review a few more of Kurzweil's predictions:

1. In 1990, Kurzweil predicted that computers would beat chess players by 2000. Deep Blue slayed Garry Kasparov in 1997.

2. He predicted in 1999 that personal computers would come embedded in jewelry, watches, and all sorts of other shapes and sizes. Uh, yeah.

3. In 1999, Kurzweil said that by 2009 we'd mostly be using speech recognition programs for the text we write. Not really happening, because it turns out that speech recognition software is super hard to perfect.

4. By 2029, Kurzweil says that advanced artificial intelligence will lead to a political and social movement for robots, lobbying for recognition and certain civil rights.

5. By the 2030s, most of our communication will not be between humans, but instead human to machine.

6. By 2099, the entire brain will be entirely understood. Period. Done.

Watch Neil deGrasse Tyson, who calls himself Kurzweil's biggest skeptic, talk to the inventor and futurist in this 2016 "Cosmology Today" episode:

See the original post:

The Singularity by 2045, Plus 6 Other Ray Kurzweil Predictions - Electronics | HowStuffWorks

This Is What Could Happen if AI Content Is Allowed to Take Over the Internet – Singularity Hub

Generative AI is a data hog.

The algorithms behind chatbots like ChatGPT learn to create human-like content by scraping terabytes of online articles, Reddit posts, TikTok captions, or YouTube comments. They find intricate patterns in the text, then spit out search summaries, articles, images, and other content.

For the models to become more sophisticated, they need to capture new content. But as more people use them to generate text and then post the results online, it's inevitable that the algorithms will start to learn from their own output, now littered across the internet. That's a problem.

A study in Nature this week found a text-based generative AI algorithm, when heavily trained on AI-generated content, produces utter nonsense after just a few cycles of training.

"The proliferation of AI-generated content online could be devastating to the models themselves," wrote Dr. Emily Wenger at Duke University, who was not involved in the study.

Although the study focused on text, the results could also impact multimodal AI models. These models also rely on training data scraped online to produce text, images, or videos.

As the usage of generative AI spreads, the problem will only get worse.

The eventual end could be model collapse, where AI, increasingly fed data generated by AI, is overwhelmed by noise and only produces incoherent baloney.

It's no secret generative AI often hallucinates. Given a prompt, it can spout inaccurate facts or dream up categorically untrue answers. Hallucinations could have serious consequences, such as a healthcare AI incorrectly, but authoritatively, identifying a scab as cancer.

Model collapse is a separate phenomenon, where AI trained on its own self-generated data degrades over generations. It's a bit like genetic inbreeding, where offspring have a greater chance of inheriting diseases. While computer scientists have long been aware of the problem, how and why it happens for large AI models has been a mystery.

In the new study, researchers built a custom large language model and trained it on Wikipedia entries. They then fine-tuned the model nine times using datasets generated from its own output and measured the quality of the AI's output with a so-called perplexity score. True to its name, the higher the score, the more bewildering the generated text.
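
The study's exact evaluation pipeline isn't reproduced here, but the metric itself is standard: perplexity is the exponential of the average negative log-likelihood a model assigns to each token. A minimal sketch:

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    # exp of the mean negative log-likelihood: a model that finds the
    # text predictable scores low; a "bewildered" model scores high.
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

confident = [math.log(0.9)] * 20   # model gives each token 90% probability
confused = [math.log(0.01)] * 20   # model gives each token 1% probability
print(perplexity(confident))  # ~1.11
print(perplexity(confused))   # ~100.0
```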

Within just a few cycles, the AI notably deteriorated.

In one example, the team gave it a long prompt about the history of building churches, one that would make most humans' eyes glaze over. After the first two iterations, the AI spewed out a relatively coherent response discussing revival architecture, with an occasional @ slipped in. By the fifth generation, however, the text completely shifted away from the original topic to a discussion of language translations.

The output of the ninth and final generation was laughably bizarre:

architecture. In addition to being home to some of the worlds largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-.

Interestingly, AI trained on self-generated data often ends up producing repetitive phrases, explained the team. Trying to push the AI away from repetition made the AI's performance even worse. The results held up in multiple tests using different prompts, suggesting it's a problem inherent to the training procedure, rather than the language of the prompt.

The AI eventually broke down, in part because it gradually forgot bits of its training data from generation to generation.

This happens to us too. Our brains eventually wipe away memories. But we experience the world and gather new inputs. Forgetting is highly problematic for AI, which can only learn from the internet.

Say an AI sees golden retrievers, French bulldogs, and petit basset griffon Vendéens (a far more exotic dog breed) in its original training data. When asked to make a portrait of a dog, the AI would likely skew towards one that looks like a golden retriever because of an abundance of photos online. And if subsequent models are trained on this AI-generated dataset with an overrepresentation of golden retrievers, they eventually forget the less popular dog breeds.
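
A toy simulation (not from the study) makes the mechanism concrete: if each generation adopts the breed frequencies it observes in a finite sample of the previous generation's output, rare categories drift toward zero, and once a generation happens to draw none of a breed, that breed can never return.

```python
import random
from collections import Counter

random.seed(0)
# Hypothetical starting mix of dog photos in the training data.
breeds = {"golden retriever": 0.70, "french bulldog": 0.25,
          "petit basset griffon vendeen": 0.05}

for generation in range(1, 8):
    # Each generation "trains" on a finite sample of the previous
    # generation's output and inherits the frequencies it observes.
    sample = random.choices(list(breeds), weights=list(breeds.values()), k=100)
    counts = Counter(sample)
    breeds = {b: counts[b] / len(sample) for b in breeds}
    print(generation, breeds)
```

The same sampling bias, applied to words and ideas rather than dog photos, is what pushes text models toward the bland and the common.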

"Although a world overpopulated with golden retrievers doesn't sound too bad, consider how this problem generalizes to the text-generation models," wrote Wenger.

Previous AI-generated text already swerves towards well-known concepts, phrases, and tones, compared to other less common ideas and styles of writing. Newer algorithms trained on this data would exacerbate the bias, potentially leading to model collapse.

The problem is also a challenge for AI fairness across the globe. Because AI trained on self-generated data overlooks the uncommon, it also fails to gauge the complexity and nuances of our world. The thoughts and beliefs of minority populations could be less represented, especially for those speaking underrepresented languages.

"Ensuring that LLMs [large language models] can model them is essential to obtaining fair predictions, which will become more important as generative AI models become more prevalent in everyday life," wrote Wenger.

How to fix this? One way is to use watermarks, digital signatures embedded in AI-generated data, to help people detect and potentially remove that data from training datasets. Google, Meta, and OpenAI have all proposed the idea, though it remains to be seen if they can agree on a single protocol. But watermarking is not a panacea: other companies or people may choose not to watermark AI-generated outputs or, more likely, can't be bothered.
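
The companies' actual schemes aren't detailed here, but one academic proposal (the "green list" approach) works roughly like this: the generator nudges its sampling toward a pseudorandom subset of tokens derived from each preceding token, and a detector then counts how often those tokens appear. A toy sketch of the detection side, with a made-up hashing rule:

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    # Toy rule: hash the (previous token, token) pair. By construction,
    # about half of all candidate tokens are "green" in any context.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    pairs = list(zip(tokens, tokens[1:]))
    return sum(is_green(p, t) for p, t in pairs) / len(pairs)

# Ordinary text hovers near 0.5; output from a generator that was
# biased toward green tokens would score significantly higher.
print(green_fraction("the quick brown fox jumps over the lazy dog".split()))
```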

Another potential solution is to tweak how we train AI models. The team found that adding more human-generated data over generations of training produced a more coherent AI.
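
In training-loop terms, that fix amounts to never letting the human-written share of each generation's dataset fall to zero. A schematic sketch, with hypothetical names:

```python
import random

def next_training_set(human_texts, model_texts, human_share=0.5):
    # Blend fresh human-written text into every generation's training
    # mix instead of training on the previous model's output alone.
    n_human = round(len(model_texts) * human_share / (1 - human_share))
    n_human = min(n_human, len(human_texts))
    return random.sample(human_texts, n_human) + model_texts
```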

All this is not to say model collapse is imminent. The study only looked at a text-generating AI trained on its own output. Whether it would also collapse when trained on data generated by other AI models remains to be seen. And with AI increasingly tapping into images, sounds, and videos, it's still unclear if the same phenomenon appears in those models too.

But the results suggest there's a first-mover advantage in AI. Companies that scraped the internet earlier, before it was polluted by AI-generated content, have the upper hand.

There's no denying generative AI is changing the world. But the study suggests models can't be sustained or grow over time without original output from human minds, even if it's memes or grammatically challenged comments. Model collapse is about more than a single company or country.

"What's needed now is community-wide coordination to mark AI-created data, and openly share the information," wrote the team. "Otherwise, it may become increasingly difficult to train newer versions of LLMs [large language models] without access to data that were crawled from the internet before the mass adoption of the technology or direct access to data generated by humans at scale."

Image Credit: Kadumago / Wikimedia Commons

Read the original here:

This Is What Could Happen if AI Content Is Allowed to Take Over the Internet - Singularity Hub