Me: I’ve cut my coffee intake down to one cup a day! Look how disciplined and restrained I am!
Also me: drinks 1.5 cans of Celsius per day
Please use Bing Copilot instead of ChatGPT for this. It’s largely the same language model underneath, but backing replies with actual sources, cited in a way that lets you click through and check the information you’re getting, makes a huge difference for a variety of important reasons.
Isn’t GPT-4o (the multimodal model currently offered by OpenAI) supposed to be able to do things like this?
Don’t get me wrong, I think you would be better served by treating this as a fun exercise to develop your imagination and writing skills. But since it’s fanfic, and presumably for personal, non-commercial purposes, I would consider what you want to do a fair and generally ethical use of the free version of ChatGPT…
There are a bunch of reasons why this could happen. First, it’s possible to “attack” some simpler image classification models: if you collect a large enough sample of their outputs, you can mathematically derive a way to perturb any image so that it won’t be correctly identified. There have also been reports that even simpler processing, such as blending a real photo of a wall with a synthetic image at very low opacity, can trip up detectors that haven’t been trained to be more discerning. But it all comes down to how you construct the training dataset, and I don’t think any of this is a good enough reason to give up on machine learning for synthetic media detection in general; in fact, this example gives me the idea of using autogenerated captions as an additional input to the classification model. The challenge there, as in general, is keeping such a model from assuming that all anime is synthetic, since “AI artists” seem to be overly focused on anime and related styles…
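The low-opacity blending trick mentioned above is just per-pixel linear interpolation. A minimal sketch in plain Python (hypothetical pixel values, no image library; a real pipeline would use something like Pillow’s Image.blend):

```python
def blend(real, synthetic, alpha=0.05):
    """Alpha-blend two images given as flat lists of 0-255 pixel values.

    alpha is the fraction contributed by the synthetic image; at 5% the
    result is nearly indistinguishable from the real photo to a human,
    yet can shift a fragile detector's features enough to confuse it.
    """
    return [round((1 - alpha) * r + alpha * s) for r, s in zip(real, synthetic)]

# Four example pixels: the blended values barely move from the real ones.
real = [200, 128, 64, 10]
synthetic = [0, 255, 255, 250]
print(blend(real, synthetic))  # → [190, 134, 74, 22]
```

A detector trained with this kind of lightly-blended image included in its dataset (labeled as synthetic, or as a separate “manipulated” class) is much harder to fool this way.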