Thoughts: how do you cope with generative AI?
Darmawan Disiek
8 replies
Hey everyone! So, I've been diving into this whole generative AI space recently and, honestly, it's blowing my mind a bit. I mean, we're in a time where machines can whip up images, videos, and all sorts of content. And it's becoming super tricky to tell if it's made by a human or some fancy AI. How wild is that?
Anyone else feeling a bit weirded out by not knowing what's human-made and what's AI-generated? Do we risk not trusting anything we see online?
Replies
André J @sentry_co
It will solve itself. More validators and authenticity badges, IMO.
While I appreciate and often use AI-generated content myself, such as articles refined by ChatGPT or images generated by MidJourney, I find it unacceptable in two cases: 1) generating and spreading misinformation; 2) using generative AI to create articles or artworks and then falsely claiming them as one's own.
You are not the only one concerned about this. Indeed, as AI technology advances, AI-generated content is becoming increasingly indistinguishable from human-made content. However, I believe that in most cases, as long as AI's autonomous creations do not harm us (setting aside situations where humans deliberately steer AI toward things like disinformation or violations of trust and privacy), they are acceptable.
@jessicaliu
For now I believe it's needed for the greater good. But as it gets more popular, I expect there will be more misuse of AI.
The way I see it, the only solution is more education, regulation, or initiative from the tech companies themselves.
Hi Darmawan, you shared this question in our group. I think it's better if I answer it here.
Let me address this from my personal point of view first. Since I work with a lot of AI-generated material, I can more or less accurately tell whether a piece of content is AI-generated or not. But of course, that is not the case for people who are less familiar with it.
Now let me put on my organization cap. Hi, Steffen from Numbers Protocol here. We are working on this exact problem (on a larger scale), not limited to AI but also covering content created by humans. Bringing trust back to digital media is one of our goals. And luckily for you, and for most people struggling to trust digital media, there have been initiatives from us and from global tech leaders to counter this phenomenon.
Recently Microsoft announced that all AI-generated media created with its platforms will have an invisible digital watermark added to them. This watermark is based on the standard set by the Coalition for Content Provenance and Authenticity (C2PA).
Using the C2PA watermark, you can verify a piece of content's authenticity on the Content Authenticity Initiative (CAI) site. We wrote a comprehensive article about it if you'd like to read more: https://www.numbersprotocol.io/b...
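For the more technically inclined, here is a minimal sketch of how you might inspect the C2PA provenance data embedded in a file yourself. It assumes the open-source c2patool CLI from the CAI is installed and on your PATH, and the exact JSON fields it prints can vary between versions, so treat it as illustrative rather than a reference implementation.

```python
# Minimal sketch: read the C2PA provenance manifest embedded in a media file.
# Assumes the open-source `c2patool` CLI (from the Content Authenticity
# Initiative) is installed and on PATH. Output details differ between
# versions, so this is illustrative only.
import json
import subprocess
import sys


def read_c2pa_manifest(path):
    """Return the C2PA manifest store for `path` as a dict, or None if none is found."""
    result = subprocess.run(
        ["c2patool", path],  # c2patool prints the manifest store as JSON by default
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest, or the file could not be read
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None  # output was not JSON (e.g. an informational message)


if __name__ == "__main__":
    manifest = read_c2pa_manifest(sys.argv[1])
    if manifest is None:
        print("No C2PA provenance data found.")
    else:
        print(json.dumps(manifest, indent=2))
```

In practice most people will just drag a file onto the CAI's verification site, which does the same check underneath, but it can be handy to script it.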
Provenance is the key. You will hear more about provenance in the not-so-distant future, especially given the pace at which generative AI is developing. We have more resources if you want to learn more about provenance: https://www.numbersprotocol.io/b...
@steffen_darwin
Amazing! This is actually the first time I've heard of C2PA and CAI. I learned a great deal from you. Thank you! I hope your project takes off and solves this problem for us.