
From a reader comment on an article <https://www.theregister.com/2023/02/27/microsoft_ai_gpt3_data/> in which Microsoft suggests that, since these machine-learning models are so good at generating fake and even nonsensical responses, they might as well be used to deliberately generate fictional sample data for testing purposes:
For decades, we've written hundreds of sci-fi stories where you have an evil out-of-control AI, and that AI is hyperintelligent but can only reason along strict logical lines, and is ultimately defeated by exploiting its lack of imagination. And then, suddenly, it turns out that the AIs we can actually make have way too much imagination, and are utterly incapable of logical thought. I find this hilarious.
-- Lawrence D'Oliveiro