tasks. Current models are not going to invent new science. Even the Copilot code tools are mostly memorized prediction across GitHub and Stack Overflow; I wouldn't turn my entire development team over to that. As a tool, sure, but not the be-all and end-all. Transformers rarely get you all the way to what you want, usually just pointed in the right direction.

And don't get me started on why transformers are so bad at discrete tasks. The problem is they are vector functions over a continuous distribution (think of a high-dimensional curve), so they don't do well on discrete (non-continuous) tasks like sorting a list or doing math. They only work if they happen to have been trained on a similarly worded problem that they can interpolate from and predict the next words.
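Here's a toy sketch of the continuous-vs-discrete point (not a transformer, just an illustration with stdlib Python): a sigmoid is a smooth stand-in for the discrete step function, and it approximates it fine away from the jump but can't commit to either side near the discontinuity, because no continuous curve can.

```python
import math

def smooth_step(x, k=10.0):
    # A sigmoid: a continuous approximation of the discrete step function.
    return 1.0 / (1.0 + math.exp(-k * x))

def hard_step(x):
    # The actual discrete function: jumps from 0 to 1 at x = 0.
    return 1.0 if x > 0 else 0.0

# Far from the discontinuity, the continuous approximation is nearly exact...
print(abs(smooth_step(1.0) - hard_step(1.0)))    # tiny error

# ...but near the jump it hedges between 0 and 1 and gets it badly wrong.
print(abs(smooth_step(0.01) - hard_step(0.01)))  # error of almost 0.5
```

Same idea with a model: interpolation on a smooth curve works great in the middle of the training distribution and falls apart exactly where the task demands a hard, discrete decision.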