Posted By Peter Bentley

AI is making the news. This astonishing technology is enabling some Nobel-prize-worthy new scientific breakthroughs. AI also makes the news for the wrong reasons – it’s being used to create clickbait, deepfakes, pornography, election misinformation… and the companies that help supply the data may be exploitative and harmful. But in reality, none of this is new – AI is just the latest and easiest way to achieve such twisted goals.


I have a bigger problem with social media today. Our large social media companies deliberately use content curation strategies which amplify polarisation, causing harm to the fabric of our societies. They show inappropriate content to our children, causing numerous psychological issues. The saying goes that if the product is free, then you are the product. These companies treat us - their users, their “product” - as disposable. There’s always more being born. Why look after our wellbeing when they can make easy money from exploiting and amplifying our weaknesses until we break? “Churn” is one easy term used to brush under the carpet those whose lives have been ruined by social media.


But what if there were a different kind of tech company that cared about us? A company founded on the principle of improving our awareness of ourselves, our surroundings, our “consciousness”. A company that used AI not to exploit us, but to help us understand ourselves better, so that we might make better life choices. This is the hypothetical scenario that forms the core of Michael Rosen’s new novel, The Consciousness Company. It’s a familiar story of two founders starting in a garage and inventing a new kind of AI technology. We watch as they navigate the world of investors and company growth. What’s fascinating about this story is the way it is told: we live inside the heads of the characters, who are unnamed. We hear their thoughts, their feelings, and we notice how little of their world they perceive. We see how everyone is wrapped up in their own lives, their minds following often trivial tracks that bear little relation to the physical world they inhabit. And we see how they are changed over time, in a slow awakening to their realities. Rosen imagines ever more advanced technologies as the company grows towards global market share. He shows us some of the ethical dilemmas faced by such a company, and even how one of the founders reaches messiah-like status, while the other is left feeling inadequate.


But the question remained with Rosen’s hypothetical company – how would it make a return on the investment? When it finally pivoted from the Silicon Valley notion of growing the user base to monetisation, what values might it sacrifice and what would it become? Would it mutate into another monster, as our social media platforms have? If so, would it be a monster with AI-powered mind-reading and manipulation powers that could destroy societies? It sounds all too familiar. Rosen leaves us to make up our own minds.

the consciousness company book cover
The author Michael Rosen sponsored this blog entry.


 
Posted By Peter Bentley

I've been advising thesqua.re for a while now on matters relating to data and AI. Most recently, our lengthy project on sustainability is coming to fruition. In the business accommodation rental market there's a real need to understand the carbon and general sustainability implications of your stay, which might be for a week or more in a new city. Pick inefficient accommodation and you're not really living up to your company's reputation for green choices. But there is simply nothing on the market that provides address-specific details in this respect. Perhaps there's a gross approximation of a CO2 figure, but we've shown that most of these can be an order of magnitude wrong, and sometimes identical figures are given for all apartments of the same type in the entire country - not exactly providing informed choice for the consumer! Our solution is EcoGrade. We've spent several years compiling the best possible data across multiple countries to provide the best and most specific sustainability metric in the industry today. We've been winning prizes for the work - hurrah! We're now starting to look at how AI can help supplement the data when insufficient information is available. Hopefully by doing this we can improve sustainable choice and also encourage all those landlords to improve the efficiency of their accommodation!

ecograde
 

 
Posted By Peter Bentley

I was asked recently to explain why generative AI systems are so bad at text. Well, I don't think they will be for much longer, but for now here's an explanation.

https://petapixel.com/2024/03/06/why-ai-image-generators-struggle-to-get-text-right/

bad generative text
 


 
Posted By Peter Bentley

Another week and another article on the latest AI developments, quoting me again. This time it's the release of GPT-4 (which seems to be GPT-3.5 with a few extra clever bits added on). These large language models are trained on a huge amount of data from the Internet. This gives them the apparent ability to be "clever AIs", answering almost any question we ask, writing computer code, and describing the contents of images. But it's important to remember that these models do not understand this content in the way that we do. While the data may be huge, the models do not experience our world, only our data. They also do not have the correct neural architecture to enable them to truly understand and adapt to their environments in the way that we do. Right now they're trained once and then that's it - they cannot keep up with changes. Nevertheless they're the best AI we've ever made, and their abilities are remarkable. Here's more of what I said in the article.

chatGPT
 


 
Posted By Peter Bentley

I was interviewed for BBC Science Focus magazine recently (which I also write for) and asked to explain why the current batch of generative AI image systems is so terrible at producing hands. I suspect this will be a temporary "feature" of these systems as progress is fast. However, for now it's entertaining, if really freaky.

Check out the article here.

freaky hands

Things are moving so quickly I'm being asked to comment on new work regularly. Here's yet another quote in an article about using large language models for robot control.


 
Posted By Peter Bentley

Large AI models keep improving, with applications based on them spreading like wildfire (and I use that simile deliberately). Here's a bunch of questions I was asked recently by journalists in this area, with my replies. An NBD article that quotes some of this was published today.

1. What’s your take on the growing popularity of ChatGPT?  Why ChatGPT? Why in 2022?

ChatGPT is one of the use cases of OpenAI’s GPT-3.5 language models, which came out in early 2022. ChatGPT was released at the end of 2022 and immediately caused a stir because it is very good at chatting compared to most other NLP systems to date.

2. What are the upsides of ChatGPT compared to other content creation tools?

It can automatically create content from a simple request. For example, it can explain something complex in simple terms, often writing quite well. This is automatic content creation, compared to current tools that might help at most with spelling or grammar.
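To make the "simple request" point concrete, here is a minimal sketch of that kind of automatic content creation through an API, assuming the OpenAI Python SDK and an API key in the environment; the model name and the prompt are purely illustrative and were not part of the interview.

from openai import OpenAI

# Minimal sketch of content creation from a simple request.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set;
# the model name and prompt below are illustrative placeholders.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Explain how a search engine ranks pages, in three simple sentences."},
    ],
)

print(response.choices[0].message.content)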

3. A report claims that Jasper, the unicorn based in Texas, was winning the AI race, but ChatGPT blew up the whole game. Could you please share with us the impact of ChatGPT on products of the same kind? Will it shrink the development space or lead to M&A or closure of some startups in the AIGC industry?

Jasper is another application that uses the same underlying technology. Jasper is focussed specifically on content creation with lots of templates, and can generate nice text for websites, blogs and many other kinds of media. ChatGPT and Jasper are two of many such applications - there are more and more companies focussing on the use of these pre-trained large language models, so there will be an increasing number of applications in the coming months.

4. Meta's chief AI scientist says ChatGPT is “not particularly innovative”, and “nothing revolutionary”, but it seems everyone believes it is the next big thing in the AI industry. In your opinion, will it shake the existing landscape of the AI industry, and are Google and Meta lagging far behind in this field?

It is the latest large pretrained model to be released, and it causes a stir because its abilities are impressive. Like most of these large AI models, it cost millions of dollars to develop and train, so when something so expensive and impressive gets freely released for use, people get excited. Google and Meta have big enough budgets to catch up, and they are working on their own much bigger models, so it will not be long before they have even more impressive results to show.

5. Since works created by ChatGPT are based on a large amount of data, are there any legal risks, for example copyright infringement or legal disputes due to errors made, harmful instructions or biased content? And are the works themselves protected by copyright laws? Could you elaborate on that?

Yes, there are many problems around the training of these models. Often huge amounts of data scraped from the Internet are used as training data. It means that when ChatGPT suggests new computer code for you, it may be duplicating code that someone else wrote - and that code may be protected by copyright. The same could apply to any content produced by these systems. At present there are few countries that permit computer-generated content itself to be copyrighted.

6. What are the downsides or limitations of ChatGPT? The chatbot is said to have limited knowledge of the world and events after 2021; will this hinder its further development or commercial use?

ChatGPT is based on a model completed in early 2022, so it cannot be aware of anything after that. New versions are being produced now, which will know more. However, every time one of these massive models is trained it requires huge computational resources and huge costs - this is not really very sustainable in terms of cost or the environment, so it would be better in the future to have more effective, smaller models, each perhaps focussed on a specific kind of data, that can be trained faster. At present the different companies are in an arms race towards bigger and bigger models so this is not moving in a sensible direction.

7. In your opinion, in which areas can ChatGPT be maximized and fully leveraged, since it can do a lot of things such as writing papers and creating music?

It’s hard to predict which areas will benefit most as this technology is still very new. However, there are some concerns that these models can “hallucinate” results - they provide content that looks correct but is entirely fake. This is extremely dangerous in science and education, where we need to ensure the accuracy of everything, so considerable work is still needed to prevent this effect for such applications. Inaccurate results are not useful for search engines or journalism either! Creating strange or unusual results is not a problem in the arts, so for fiction, poetry, music or art there are many immediate applications. Sadly, the easiest use-case is online content such as advertising or so-called “fake news” or “clickbait”, where accuracy is irrelevant. If the technology is misused in this way, then we have the potential to fill the Internet with computer-generated junk - something that ChatGPT could do at unbelievable speeds (and the Internet is already bad enough with human-generated junk).

8. If we look at the bigger picture, what are the trends in the AIGC industry, and which niche areas will be highly favoured by venture capital?

The venture capitalists will favour those applications that look like they will make the most money fastest. These applications may not always be the ones best for our societies. My personal preference would be applications that carefully shape the AIGC industry. We do not need more computer-generated online content - we have enough content just generated by people. In my view, applications that focus on this area are short-sighted and harmful. We do need better ways to educate people and to let people discover useful new findings hidden within large amounts of data. Most importantly, today we need better methods to verify the accuracy of content. We need to be able to trace exactly where every claim, image, or line of code originated, so that we can verify if it is true and real. If we cannot do this, the Internet may become nothing more than an ocean of random fictions. That’s a big problem if future AIs are trained using it!

9. Does Google's specific business overlap with ChatGPT? What are the advantages of each?

ChatGPT has the potential to change how we search. Instead of us trying many different search terms in the search engine and looking through the list of results (and adverts), a large language model can do things differently. We can ask the model a question in normal sentences and it can summarise the topic in a nicely written piece of text, providing links to the source webpages that it used. The advantage of this approach is that we do not have to read through long lists of results any more. The disadvantage is that ChatGPT and systems like it may not always return correct results. Sometimes they even “hallucinate” results - they make up entirely fake statements that sound correct. So a good search engine using ChatGPT technology would need to be very carefully created so that it only produces results that are entirely correct and accurately summarise real content on websites.
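As a rough illustration of how such a question-and-summary search could be wired up, here is a toy sketch; the retrieve() stub, the example pages and the model name are all placeholder assumptions rather than a description of any real product.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def retrieve(query: str) -> list[dict]:
    # Stand-in for a real web index; a real system would search and fetch pages here.
    return [
        {"url": "https://example.org/page-a", "snippet": "Text from a relevant page..."},
        {"url": "https://example.org/page-b", "snippet": "Text from another relevant page..."},
    ]

def answer(query: str) -> str:
    # Build a prompt that asks the model to summarise only the retrieved sources
    # and to cite them, so every claim can be traced back to a webpage.
    sources = retrieve(query)
    context = "\n".join(f"[{i + 1}] {s['url']}\n{s['snippet']}"
                        for i, s in enumerate(sources))
    prompt = ("Answer the question using ONLY the numbered sources below, citing them "
              "like [1]. If the sources do not contain the answer, say so.\n\n"
              f"Sources:\n{context}\n\nQuestion: {query}")
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer("What do the example pages say about sustainability?"))

Constraining the model to the retrieved sources does not remove hallucination, but it at least makes the output checkable against real webpages.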

10. Do you think ChatGPT will pose a substantial threat to Google as a whole? Why?

Because search might change in this way, there would no longer be the same opportunity for the search engine to return adverts. The entire business model of Google is based on adverts - if you lose this opportunity, then your business is in trouble. I suspect Google will integrate these technologies into search quite quickly. Somehow they will have to find a way to keep providing adverts at the same time, while also making it clear to the user which results are linked to normal websites and which are from sponsored content. I’m sure they will design a method to do this quickly!


 
Posted By Peter Bentley

With the "launch" of Tesla's somewhat embarrassing humanoid robots on Friday, I was asked to give my views on this kind of product for China's NBD. I gave the quote before the launch, but I haven't changed my mind after seeing what they have achieved so far...

You can read the translated article here: https://www-nbd-com-cn.translate.goog/articles/2022-10-01/2488089.html?_x_tr_sch=http&_x_tr_sl=fr&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp

Tesla humanoid robot Optimus Prime debut Image source: Tesla AI Day video screenshot
 


 
Posted By Peter Bentley

DALLE-2 is a fascinating example of a generative AI system that can create images from a textual description. As an experiment, I gave it a series of my six-word science fiction stories. Sometimes it completely misses the mark - it can't read between the lines or imagine anything at all. But sometimes it produces really interesting illustrations of the stories. I've been posting a few examples on Twitter @peterjbentley. Here's one of my favourites: Old spacecraft make lovely hanging baskets.

old spacecraft make lovely hanging baskets
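For anyone who wants to try the same experiment, here is a minimal sketch of requesting a few images for one six-word story via the OpenAI Python SDK; the model name and image settings are illustrative assumptions, not necessarily how I generated the pictures above.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Request a few candidate illustrations for one six-word story.
# The model name and image settings are illustrative assumptions.
result = client.images.generate(
    model="dall-e-2",
    prompt="Old spacecraft make lovely hanging baskets",
    n=4,
    size="1024x1024",
)

for image in result.data:
    print(image.url)  # each URL points to one generated image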


 


 
Posted By Peter Bentley

Recently the topic of AI and sentience has come up again, after someone claimed that Google's LaMDA seems to think for itself... It's remarkable that a piece of software can have conversations that seem so realistic, but a good simulation and the real thing are still distinguishable. I was interviewed about the story for NBD China. The article appeared here. You can see Google's translation here.

NDBAI
 


 
Posted By Peter Bentley

A while ago I was interviewed by Tevy Kuch for his article on AI influencers. I gave my usual honest opinions about things..! It's now been published by New Scientist here: https://www.newscientist.com/article/mg25433900-800-the-rise-of-computer-generated-artificially-intelligent-influencers/

AI influencers new scientist