MIT does not recommend usage of the Deepseek AI assistant.
MIT does not recommend the use of DeepSeek, primarily for two reasons:
National Bias
All generative AI tools carry some degree of national bias, arising from where their training data is drawn and the nationality of the content creators. DeepSeek, however, appears to have intentional bias: it is blocked from providing information on certain topics because of specific government policy. The overall impact and extent of this policy are difficult to judge, but when intentional bias is built into a tool by policy, the tool becomes unreliable as a source of valid information.
Privacy concerns
The Australian government has raised privacy concerns about DeepSeek. DeepSeek's own privacy policy indicates that it collects personal information from users and stores it on servers in China. This information may include phone numbers, dates of birth, email addresses, and IP addresses.
Other generative AI tools may collect personal data as well, so anyone using generative AI should be wary of what personal data they make available to chat tools.
More reading:
Be careful with DeepSeek, Australia says - so is it safe to use? - BBC
Dutch privacy watchdog to launch investigation into China's DeepSeek AI - Reuters
'Be careful': Australian ministers urge caution over AI app DeepSeek - SBS
Here's how DeepSeek censorship works - Wired
Lecturers fear impact of DeepSeek on students' work - Times Higher Education
Chinese chatbot censors itself in real time - The Guardian
We tried out DeepSeek. It worked well, until we asked it about Tiananmen Square and Taiwan - The Guardian