I've aired my concerns about GPT-3 a few times in various posts, so I have a few questions:
1. If the model gives a false response to the user (assuming it's customer support, as in your example), where does the responsibility lie? Obviously it's with you, but you have no comeback against the model for its error-prone response.
2. GPT-3 is in beta, and it will be for a while. It's invite-only at the moment, and when it's released (if it's released) it will be a walled garden: you won't have final control of the model. What would it cost to create your own, in terms of processing, running costs and training costs?
3. Would an open-source version of GPT-3 perform any better than the walled-garden version? Though I know it's based on more technical training data.
Thank you for writing a very good article; I appreciate that you put a lot of time into it.