Seems like the easiest alternative to self-hosting a full-blown GPT engine without getting into the technical nitty-gritty. Not only does this save a lot of headache in LLM configuration, I'd even say it could save time on training as well. I'll give this a shot.