Elon Musk vs OpenAI, who do you support?

Anil Matcha
41 replies
Elon Musk, along with a number of other companies like Apple, has asked for a six-month pause on AI development beyond GPT-4. What are your thoughts on this? Follow me on Twitter for AI news: https://twitter.com/matchaman11

Replies

Mehdi Rifai
When asked the question, GPT-4 said this: "To determine who is "right" depends on the specific context or topic in question. It is worth noting that Elon Musk's concerns about AI safety and regulation are not necessarily contradictory to OpenAI's mission. In fact, OpenAI's Charter emphasizes long-term safety and collaboration with other research institutions to address AI's global challenges."
Artyom Trofimuk
Skinive AI: Skin Scanner, health checkup
Are you sure it's for the world and not to slow down a competitor so Elon can have time to create his own solution? I recall that Elon was once not given the CEO role at OpenAI and is now a bit resentful of the guys.
Artyom Trofimuk
Skinive AI: Skin Scanner, health checkup
@viktor_shpudeiko Let's watch; time will tell who was right. But it keeps getting more and more interesting.
Dr. Viktor
Skinive AI: Skin Scanner, health checkup
@artyom_trofimuk1 In that case, Apple and other companies are playing along with Elon. Why would that be? A conspiracy?
Kevin T.
Musk would do us all a favor if he disappeared.
Gabe Moronta
I think we're forgetting one thing: this isn't Elon vs OpenAI. This is a conglomerate of respected tech professionals and AI experts calling for the 6-month pause. Elon happened to sign the paper and, by virtue of being the biggest name on it, has become its face. The cause and purpose of the pause are legitimate; perhaps others like Musk have self-serving reasons, but that doesn't diminish the legitimacy of the paper.

That said, it is unrealistic that this will happen, and it's unfair for them to call for it. Progress is made by progressing, so let the research continue, and in conjunction with that research work to develop the necessary rules, regulations, safeguards, or whatever other safety mechanisms they want to implement. The truth is that if they really want those in place, they will work on them despite there being no official pause.

Microsoft just invested however much into becoming part owner of OpenAI; do you believe they want to stop progress now that they finally have Google Search on the ropes and looking tired? Google, aside from Bard, has just invested many millions into Anthropic; do you really believe that now, gasping for air and struggling to stay relevant in a field they once dominated, they want to "pause"?

It will continue, it should continue, and those who signed the paper should show they were honest about the concerns in it by continuing to develop those safety measures.
Ivan Ralic
Collabwriting
OpenAI has exponential learning growth and has closed-sourced parts of the system. Elon donated to OpenAI early on but has since lost his influence to Microsoft. Elon is building his own AGI behind closed doors; he has often said that Tesla had the best AI team in the world. To build a proper AGI you need to teach it as much as possible about what it's like to be a human, thus:
1. Take over a platform with a lot of discussions and debates (Twitter).
2. Remove any regulations and censorship so people can speak freely about anything, enlarging the dataset.
2b. (Pause the biggest competitor for 6 months.)
3. Profit.
Jonas Schaller
I think there should be no regulations, or at least no more than there are right now. That literally slows down future tech which could help in so many ways. Every invention or bit of progress has its downsides; if someone really wants to do illegal stuff with this tech, they could do it anyway, so there is no real need to pause development of this amazing tech.
Kevin Lu
I think the intention makes sense and is good, but it's too naive to ask OpenAI to stop for 6 months.
Dr. Viktor
Skinive AI: Skin Scanner, health checkup
I think this is the right decision; there should be rules and standards in any field. Only then can we talk about the benefits and safety of the product.
Mélodie Girardo
Why 6 months? The time it takes for him to develop his own?
TanP0l
I will support whoever contributes the most.
Tanzirul Huda
I think this is a positive step forward for the AI community. While technology is advancing rapidly, it's important to ensure that the ethical and safety implications of artificial intelligence are taken into consideration. Pausing development beyond GPT-4 for six months gives us time to review and discuss these implications, and to ensure that any further development is done responsibly.
James Porteous
It is likely a moot point. The military and corporations have already staked their claim. And the billionaires are more concerned about becoming trillionaires than the well-being of common folks. I think this genie is out of the bottle for good.
Daniel Do
Optimized Toolbox
I think it's logical to stop for a little bit and improve the current laws. It's crazy that there are no regulations and corporations can simply release it without any guarantee it's safe. Six months isn't too much, and it's critical to simply gather some rules and best practices around it.
Richard Gao
I recently made a post about this as well. I believe it's just a way for them to get ahead of the competition on AI.
Alina Dyabina
The power of OpenAI is close to that of atomic energy, so we need some rules to use it safely. But I'm not sure we should stop for 6 months. Why exactly 6? It seems the figure came out of nowhere. Even ordinary laws and rules keep improving every day, so even five years might not be enough to foresee everything.
Mirena Vasileva
@alina_dyabina Agreed, it is growing at a rapid scale, and we don't want it out of control. How we came up with the 6-month period is interesting; there is always a bigger game that we are unaware of.
Nisa Meray
Not sure. AI tools are advancing rapidly; the big players are scared of not being in control. But also, new unsupervised harmful products can be bad for us all.
Mirena Vasileva
I think, or at least hope, it is done for the right reasons. We don't want AI to be out of control and turn into a detriment to humanity and its safety. The fact that this has been raised should be a concern in and of itself.