(May 12, 2023 at 3:27 am)Belacqua Wrote:
(May 12, 2023 at 2:33 am)emjay Wrote: I'm basically kicking myself for not trying it sooner
You know, if I were still writing academic papers I think I would use this.
The hardest part in such a paper is the first draft, and then it's just a question of clarification and refinement. If the AI did the first draft for me, I could go through adding and subtracting until it was mine.
Although there is the question of where the AI gets its information from. Somehow it searches around and gets information about Swedenborg, for example -- I don't think it has such information ready in its memory. So if I were writing a paper with the ambition of adding original findings to the world, the first draft wouldn't be enough. You could use it for things like background, though. Like the already-accepted points which you're holding up as evidence for your new proposition.
Personally I'd be very cautious about publishing anything produced with it, given the concerns raised in this video:
Basically, since you can't know how much of its output is synthesised versus quoted verbatim, you'd always be at risk of inadvertently breaching someone else's copyright if it happened to reproduce part of their work, and, as I understand those terms and conditions, you could then be held liable if they decided to sue either you or OpenAI. So as much as I want to share its creative output, I find myself reluctant to, preferring to err on the side of caution.
Also, here's the video I mentioned earlier about how it works:
So from that, it basically appears to me to be a large-scale predictive text model whose output is influenced by context, not just from the latest prompt but from the entire conversation leading up to it. In that sense it can produce novel outputs, as constrained by the specific context; in Neo's case, the constraints of Swedenborg and Terriers. That combination is unusual enough that the output would also be 'novel' in one sense, i.e. in how the neural network 'settles' to accommodate those particular constraints, such that a unique combination of constraints will likely settle into a similarly unique configuration of the network. But at the same time I think brewer has hit on an interesting point in his question to Neo, though his point may be different from mine: namely, how determined that output is. That is, will the same given set of constraints always produce the same output, or is there extra variation added to the process? The fact that it ranks candidate outputs is one source of variation, I suppose, but I wonder if there are others, e.g. whether any RNG is involved in choosing among them.
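To make that determinism question concrete, here's a minimal sketch in Python of how sampled decoding generally works in these models. The token list and scores are made up for illustration, and this isn't OpenAI's actual code; it just shows that with a 'temperature' of zero the same context always yields the same next token, while a non-zero temperature brings an RNG into the choice, so identical constraints can produce different outputs.

```python
# Minimal sketch of next-token decoding: the model scores candidate tokens,
# and the decoder either picks the top one (deterministic) or samples from
# the probability distribution (RNG-driven variation).
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into probabilities; lower temperature sharpens the distribution."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def pick_next_token(tokens, logits, temperature=0.0, rng=random):
    """Greedy choice when temperature == 0; otherwise sample -- the 'extra variation'."""
    if temperature == 0.0:
        best = max(range(len(logits)), key=lambda i: logits[i])
        return tokens[best]
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical scores for the next word after some Swedenborg-and-terriers context.
tokens = ["wrote", "believed", "barked", "argued"]
logits = [2.1, 1.9, 0.3, 1.7]

print(pick_next_token(tokens, logits, temperature=0.0))                       # always "wrote"
print([pick_next_token(tokens, logits, temperature=0.8) for _ in range(5)])   # varies run to run
```

If something like that sampling step is in play, then even the exact same conversation could settle into different wordings each time, which would answer brewer's question with "not fully determined".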