OpenAI and DeepSeek have entered a dispute. The essence of the problem: the Chinese startup allegedly used "distillation," a technique in which a small model learns by imitating the responses of a large one. According to OpenAI, this directly violates its terms of use, which prohibit copying its services and building competitors on its data.
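To make the idea concrete, here is a minimal sketch of the core math behind distillation in its standard soft-target formulation: the student is trained to minimize the KL divergence between the teacher's softened output distribution and its own. All names and numbers below are illustrative, not taken from either company's actual pipeline.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature gives softer probabilities."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's soft targets to the student's
    temperature-scaled predictions -- the core objective of distillation."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    eps = 1e-12  # guard against log(0)
    return float(np.sum(p_teacher * (np.log(p_teacher + eps) - np.log(p_student + eps))))

# Hypothetical logits for a 3-class toy problem: the student is trained
# (by gradient descent, omitted here) to drive this loss toward zero,
# so its output distribution gradually imitates the teacher's.
teacher = [4.0, 1.0, 0.2]
student = [2.0, 1.5, 0.5]
loss = distillation_loss(student, teacher)
```

In an API-based setting the "teacher logits" would be replaced by whatever the large model's API exposes (often just sampled text), which is exactly why OpenAI frames scraping its outputs as a terms-of-use issue rather than a purely technical one.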
Now let's look at this situation in more detail.
OpenAI itself is hardly an innocent lamb. The company is currently being sued by The New York Times and by book authors for training its AI on their work without the authors' permission.
The irony is hard to miss: a corporation that accuses others of "theft" has ended up in an awkward position itself.
Still, I always try to look at a situation from several sides. For example, if language models stay locked in the hands of a few giants, they will most likely stop benefiting society. Open source would let independent developers experiment, find new solutions, and push the technology forward; after all, it is not always the large corporations that see the best path of development.
But there is a flip side. I think that if all restrictions are removed, uncontrolled scaling of AI could do more harm than good. Here's why: someone could train a powerful model and put it to malicious use, and the consequences hardly need explaining.
The truth, as always, lies somewhere in the middle: completely closing off the technology slows development, while complete openness creates a threat.
#OpenAI #SamAltman #ChatGPT #DeepSeek