For the longest time, Mozilla was synonymous with the Firefox browser, but for the last few years Mozilla has started to look beyond Firefox, especially into AI.
I don’t think AI will be a fad in the same way blockchain/cryptocurrency was. I certainly think there’s something of a hype bubble surrounding AI, though - it’s the hot new buzzword that a lot of companies are mentioning to bring investors on board. “We’re planning to use some kind of AI in some way in the future (but we don’t know how yet). Make cheques out to ________ please.”
I do think AI has actual, practical uses, though, unlike blockchain, which always came off as a “solution looking for a problem”. Like, I’m a fairly normal person and I’ve already found good uses for AI: asking it various questions where it gives better answers than search engines, having it write code for me (I can’t write code myself), etc. Whereas I’ve never touched anything to do with crypto.
AI feels like a space that will continue to grow for years, and that will be implemented into more and more parts of society. The hype will die down somewhat, but I don’t see AI going away.
The thing is, AI has been around for a really long time and has lots of established use cases. Unfortunately, none of them have to do with generative language/image models. AI is mainly used for classifying data as part of data science. But data science is extremely unsexy to the average person, so for them AI has become synonymous with the ChatGPTs and DALL·Es of the world.
Yeah, so far we’ve had discriminative AI (takes complex input, gives simple output).
Now we have generative AI (takes simple input, gives complex output).
I imagine the discussion above is about generative AI…
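The asymmetry described above can be sketched with two toy functions. This is a hedged illustration only - the hard-coded word lists and bigram table stand in for learned models, and every name here is made up; none of this is real machine learning:

```python
def classify_sentiment(text: str) -> str:
    """Discriminative direction: a whole sentence in, one simple label out."""
    positive = {"good", "great", "love", "excellent"}
    negative = {"bad", "terrible", "hate", "awful"}
    words = set(text.lower().split())
    score = len(words & positive) - len(words & negative)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"


def generate_text(seed: str, length: int = 8) -> str:
    """Generative direction: a one-word prompt in, a longer sequence out."""
    # A hard-coded bigram table stands in for a trained language model.
    bigrams = {
        "the": ["model"],
        "model": ["writes"],
        "writes": ["the"],
    }
    words = [seed]
    for _ in range(length - 1):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        words.append(followers[0])  # a real model would sample here
    return " ".join(words)
```

The point is just the shape of the mapping: the first function compresses a rich input down to one label, while the second expands a minimal input into something larger.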
I’ve found good uses for AI already in asking it various questions where it gives better answers than search engines, in writing code for me (I can’t write code myself), etc.
I’d caution against using it for these things due to its tendency to make stuff up. I’ve tried using ChatGPT for both, but in my experience, if I can’t find something on Google myself, ChatGPT will claim to know the answer but give me something that just isn’t true. For coding it can do basic things, but if I want to use a library or do some other more granular task, it’ll do something like make up a function call that doesn’t exist. The worst part is that it looks right, so I used to waste time trying to figure out why it didn’t work for me, when it turned out it didn’t work for anybody. For factual information, I had to correct a friend who gave me fake stats on airline reliability to help me make a flight choice - he got them from GPT-4, and while the numbers looked right, they deviated from other sources. In general, you never want to trust any specific numbers from LLMs, because they’re trained to look right rather than to actually be right.
For me LLMs have proven most useful for things like brainstorming or coming up with an image I can use for illustration purposes. Because those things don’t need to be exactly right.
If it were a fad, why doesn’t cryptocurrency simply die? I’ve been waiting on that for some time and nothing really happens. I’d see two reasons:

A lot of people put a lot of money into it and they won’t give it up. They’ll keep buying and selling, keeping it sort of artificially afloat even if it has no real-world usage. Well, there is actually one, which leads me to the next point:
The illegal market (and gambling) has a use case for cryptocurrencies, so they actually get used.
But to put it simply - they don’t die because they don’t have to. There is no single company that could pull the plug. By its design, cryptocurrency can coexist in our world and no one can stop it, whether people use it or not.
It’s like a torrent with millions of seeders: as long as there is at least one seeder, the torrent will exist, even if the files it contains aren’t really useful.
It’s not.
It’s massively expensive, though. There’s money pouring into it because it’s the next big thing. Eventually, the companies that can afford to consistently power a massive LLM training server farm will be the ones to keep going; the rest will flounder, get acquired, or disappear.
Mozilla isn’t a big enough fish and won’t get acquired. AI is not a fad, but it’s not a sustainable business model for a company like Mozilla so I hope all their eggs aren’t going in that basket.
Hell, I think there’s a solid argument to be made that it’s not even a sustainable model for the biggest players. As it stands, they’re offering remarkably little functionality for how much it costs them. On the other hand, Mozilla’s work in this space up until now has largely been about bringing previously unimaginable functionality to locally hosted open-source models and datasets. And that does look like a sustainable business model.
I was very sceptical towards the recent hypes (space exploration, cryptocurrencies, self-driving cars, …), which turned out to be fads, but this time… this time I’m going to guess it isn’t going to be a fad. Well, it depends what we imagine by “AI” - will you have a robot pal like in the movie I, Robot or A.I. Artificial Intelligence? Probably not. Will AI predictions and learning be put into the majority of programs, and will quite clever AI voice assistants appear, like in the movie Her? Yeah, I guess this could happen. My main reasons are:
It actually isn’t that difficult - machine learning isn’t new, and, theoretically speaking, as long as you have enough computation power, nothing is stopping you. At the moment I can’t think of any hard limit.
Laws to stop it would be very difficult. You cannot just say “No AI!” - people can run it at home, so how do you want to stop it? Which leads me to my next point:
The open-source community has also made progress in this area.
I’m afraid that if AI ends up being just a fad, Mozilla won’t be able to recover from this bet.
Don’t worry - once the hype fades, we can start calling LLMs “machine learning” again.