
INFLUENCE ACTIVITIES AGAINST LITHUANIA

Russia seeks to discredit Lithuania, increasingly accusing it of rewriting history, promoting Nazism, and spreading Russophobia

Russia’s propaganda targeting Lithuania is centred on accusations of historical revisionism, support for Nazi ideology, and fuelling Russophobia. We assess that the Kremlin is seeking to tarnish Lithuania’s image in the international arena, to justify Russia’s geopolitical interests in the region, and to discredit Lithuania’s anti-Soviet resistance.

A large and coordinated network of Russian propaganda instruments, consisting of state-controlled media, social networks, and pro-Russian websites, ensures the dissemination of narratives that discredit Lithuania. We assess that Russia most actively uses the Telegram app to conduct its propaganda and disinformation campaigns. To support the dissemination of propaganda, Russia’s information policymakers use bots (automated profiles) and troll networks, as well as targeted advertising. Disinformation campaigns typically follow a tried-and-tested model: provocative information is first posted on an obscure website or social media account and is then disseminated by prominent Telegram channels with high follower counts, thereby amplifying its reach. To reach a wider audience, the information may be translated into several languages. One such information operation was conducted in May 2024, when pro-Russian Telegram channels disseminated fake reports claiming that two individuals known for their anti-NATO stance had allegedly been detained in Lithuania on 9 May 2024. This information was subsequently circulated on Russian news websites and pro-Russian Lithuanian websites.

A key element of Russia’s information policy against Lithuania is portraying it as one of the most Russophobic countries in Europe. This narrative is mainly based on the assertion that Lithuania discriminates against Russian-speaking people and seeks to marginalise Russia’s supporters, irrespective of their nationality. Russia draws parallels between this alleged Russophobia and genocide, suggesting that Russians in Lithuania and the other Baltic States are currently experiencing what Jews did during the Second World War. We assess that Russia likely accuses Lithuania of discriminating against Russian speakers to attract the attention of international human rights organisations and the support of countries highly sensitive to ethnic, cultural, and linguistic issues.

Maria Zakharova, the Russian Foreign Ministry spokeswoman, is known for accusing the Baltic States of allegedly persecuting Russian speakers. IMAGO / SNA / Scanpix

The manipulation of historical memory constitutes an important element of the Kremlin’s confrontational policy. The Kremlin supports historical initiatives that discredit Lithuania and the other Baltic States, thereby legitimising interpretations of history that serve Russia’s interests. These initiatives aim to disseminate a narrative that accuses the Baltic States of systematically distorting the history of the Second World War, promoting Nazi ideology, and glorifying Nazi collaborators. The initiative ‘With no Statute of Limitations’, which has been running for several years and is aimed at the Russian public, supports this narrative. Under the auspices of this initiative, Kremlin-linked institutions implement various projects that purport to expose the Baltic States as historical revisionists.

In 2024, as Russia commemorated the 80th anniversary of the so-called liberation of Europe from occupation by Nazi Germany, the Kremlin implemented a wide range of projects discrediting Lithuania and its anti-Soviet resistance. In 2025, when Russia commemorates the 80th anniversary of the end of the Second World War, the number of Russian history-policy and propaganda initiatives against Lithuania and the Baltic States will increase.

Artificial intelligence tools are used to generate misleading content

Generative AI tools, including deepfakes (video, images, audio) and text generation, offer the potential to create digital content faster, more cost-effectively, and with less human intervention.

AI tools intended for content generation are not necessarily a threat in themselves. However, they can accelerate the dissemination of disinformation and be used to discredit political systems, the media, or electoral processes. Until 2024, the impact of AI-generated content was relatively limited, but with ongoing technological developments AI-generated disinformation will highly likely become more effective and harmful.

Video and image deepfake

Deepfakes generate reality-distorting images, photos, and videos. They can be used to create false impressions of political figures, for example, by making them appear to slander others or promote ideas that do not align with their actual agenda. These tools are easy to use, and the identities of their authors are easily concealed.

Audio deepfake

Voice deepfakes, which imitate the voices of family members or celebrities, are very convincing and almost undetectable. Any voice discrepancies in AI-generated audio can be explained away as communication interference or poor sound quality. Voice cloning is most often used to generate false content in phone calls, radio recordings, and interviews.

Chatbots and text generation

AI chatbots, such as ChatGPT, can be used to create texts in foreign languages and adapt them to different target audiences. They can be employed to write fictitious social media posts, create personal profiles or cover stories, and conduct virtual conversations through social media bots.


China-linked bot accounts on the social network X sought to influence the 2024 US presidential election while Joe Biden was still running for president. These accounts spread disinformation on a range of issues, including migration policy and racial discrimination.

