DeepSeek rocked the tech world and the financial markets when it hit the app stores a few weeks ago, promising to provide the same kinds of high-performing artificial intelligence models as established players like OpenAI and Google at a fraction of the cost.
But some in government and data security worry that the suddenly popular open-source AI assistant’s ties to China could put American data at risk, comparing it to the social media platform TikTok, which members of Congress overwhelmingly voted to ban last year.
Those concerns aren’t limited to DeepSeek. They’re something that everyone downloading AI chatbot apps onto their phones should bear in mind, even aside from the national security flag-waving going on in legislative halls. We’ll outline some helpful tips below.
A pair of US House members recently announced plans to introduce legislation that would ban the app on all government devices, citing the Chinese Communist Party’s ability to access data collected by DeepSeek and other Chinese-owned apps, as well as the potential for DeepSeek to be used to spread Chinese disinformation.
“This is a five-alarm national security fire,” US Rep. Josh Gottheimer, a New Jersey Democrat, said in a statement, adding that the country can’t risk China being able to “infiltrate” the devices of government employees and potentially put national security at risk.
“We’ve seen China’s playbook before with TikTok, and we cannot allow it to happen again,” Gottheimer said.
Australia has banned the app on government devices. Some US states have done the same, with Texas being one of the first. New York’s governor also issued a statewide ban of DeepSeek on state government devices and systems.
Officials in South Korea took things a step further, ordering that DeepSeek be removed from the country’s app stores until improvements are made to make sure it complies with South Korea’s data protection laws.
DeepSeek’s ties to China, as well as its wild popularity in the US and the news buzz surrounding it, make for an easy comparison to TikTok, but security experts say that while DeepSeek’s data security threats are real, they’re different from those of the social media platform.
And though DeepSeek may be the hot new AI assistant now, there’s a plethora of new AI models and versions on the horizon, making it important to take care when using any kind of AI software.
In the meantime, it’s going to be a tough sell to get the average person to avoid downloading and using DeepSeek, said Dimitri Sirota, CEO of BigID, a cybersecurity company that specializes in AI security compliance.
“I think it’s tempting, especially for something that’s been in the news so much,” he said. “I think to some degree, people just need to make sure they operate within a certain set of parameters.”
Why are people worried about DeepSeek?
Like TikTok, DeepSeek has ties to China and user data gets sent back to cloud servers in that country. Also like TikTok, which is owned by China-based ByteDance, DeepSeek is required by Chinese law to turn user data over to the government if the government asks for it.
With TikTok, lawmakers on both sides of the aisle worried that US user data could be used by the Chinese Communist Party for intelligence purposes, or that the app itself could be modified to inundate American users with Chinese propaganda. Those concerns ultimately prompted Congress to pass a law last year that would ban TikTok unless it’s sold to a buyer deemed fit by US officials.
But getting a handle on DeepSeek, or any other AI, isn’t as simple as banning an app. Unlike TikTok, which companies, governments and individuals can choose to avoid, DeepSeek is something people might end up encountering, and handing information to, without even knowing it.
The average consumer probably won’t even know what AI model they’re interacting with, Sirota said. Many companies are already running more than one kind of AI model, and the “brain,” or specific model powering a given chatbot, could even be “swapped” with another in the company’s collection mid-conversation, depending on what tasks need to be done.
Meanwhile, the buzz surrounding AI in general isn’t letting up anytime soon. More models from other companies, including some that’ll be open-source like DeepSeek, are on the way and will certainly grab the future attention of companies and consumers.
As a result, focusing on DeepSeek removes only some of the data security risks, said Kelcey Morgan, Rapid7’s senior manager of product management.
Instead of focusing on the model currently in the spotlight, companies and consumers need to figure out how much risk they want to take in regard to all kinds of AI, and put in place practices designed to safeguard data.
“That’s regardless of whatever hot thing comes out next week,” Morgan said.
Could the Chinese Communist Party use DeepSeek data for intelligence purposes?
Cybersecurity experts say China has enough people and processing power to mine the massive amounts of data collected by DeepSeek, combine it with information from other sources and potentially build profiles of American users.
“I do think we’ve entered a new era where compute is no longer the limitation,” Sirota said, pointing to the abilities of companies like Palantir Technologies, which makes software that allows US agencies to crunch vast amounts of data for intelligence purposes, and adding that China has the same kinds of capabilities.
Though the people playing around with DeepSeek could be young and relatively unimportant now, as with TikTok’s users, China is happy to play the long game and wait to see if any of them grow up to be someone of influence and worth potentially targeting, Sirota said.
Andrew Borene, executive director at Flashpoint, the world’s largest private provider of threat data and intelligence, said that’s something people in Washington, regardless of political leanings, have become increasingly aware of in recent years.
“We know that the policymakers are aware; we know the technology community is aware,” he said. “My personal assessment is I’m not sure the American consumer is necessarily aware of what those risks are, or where that data goes and why that could be a concern.”
Borene emphasized that anyone working in government should exercise the “highest levels of caution” if they choose to use DeepSeek, but he also said all users should keep in mind that their data might end up in the hands of Chinese officials.
“That’s an important factor to consider,” he said. “You didn’t need to read the privacy policy to know that.”
How to stay safe while using DeepSeek or other AI models
Given that it can be tough much of the time to know what AI model you’re actually using, experts say it’s best to take care when using any of them.
Here are some tips for doing that.
Be smart with AI just like with everything else. The usual best practices for tech apply here, too. Set long, complicated and unique passwords, always enable two-factor authentication when you can, and keep all your devices and software updated.
Keep personal info personal. Think before entering personal details about yourself into an AI chatbot. Yes, this covers obvious no-nos like Social Security numbers and banking information, but also the kinds of details that might not automatically set off alarm bells, like your address, place of employment, and friends’ or coworkers’ names.
Be skeptical. Just like you’d be wary of information requests that come in the form of emails, texts or social media posts, you should be concerned about AI queries, too. Think of it like a first date, Sirota said. If a model asks weirdly personal questions the first time you use it, walk away.
Don’t rush to be an early adopter. Just because an AI or app is trending doesn’t mean you have to have it right away, Morgan said. Decide for yourself how much risk you want to take when it comes to software that’s new to the market.
Read the terms and conditions. Yes, this is a lot to ask, but with any app or software, you should really read these statements before you start handing over data, to get an idea of where it’s going, what it’s being used for and who it could be shared with. Those statements could also provide insights into whether an AI or app is collecting and sharing data from other parts of your device, Borene said. If that’s the case, turn those permissions off.
Be aware of America’s adversaries. Any app based in China should be treated with suspicion, but so should those from other adversarial or ungoverned states like Russia, Iran or North Korea, Borene said. Privacy rights you might enjoy in places like the US or European Union don’t apply on those apps, regardless of what the terms and conditions say.