Many people get their information from social media, where users exchange messages. In most cases, content is transmitted unchanged; sometimes, however, users modify the form or content of a message before passing it on. Understanding the mechanism of message mutation could help us become more resistant to misinformation and better identify fake news.
We propose a simple model in which agents (users) communicate with their neighbors by posting messages on their own pages. A single message contains a negative, neutral, or positive opinion on each of several topics. Users can transmit a message as is, create a new one, transmit it with modification, or ignore it. Messages are simplified to vectors of -1, 0, or 1, with each component representing an opinion on a specific topic. Cosine similarity between an agent's opinion vector and the message content measures how strongly the agent agrees with it. Agents do not transmit a message when the similarity is lower than a certain tolerance threshold, a parameter of the model.
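The transmission rule described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the choice of a non-strict inequality at the threshold are assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two opinion vectors; defined as 0 if either is all-zero."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    if na == 0 or nb == 0:
        return 0.0
    return float(np.dot(a, b) / (na * nb))

def transmits(opinion, message, tolerance):
    """An agent passes the message on only if its similarity to the agent's
    opinion vector reaches the tolerance threshold (assumed >=, a modeling choice)."""
    return cosine_similarity(opinion, message) >= tolerance

# Example: an agent fully agreeing with a message transmits it,
# while a message expressing opposite opinions is blocked.
agent = np.array([1, 0, -1])
print(transmits(agent, np.array([1, 0, -1]), tolerance=0.5))   # identical vectors
print(transmits(agent, np.array([-1, 0, 1]), tolerance=0.5))   # opposite vectors
```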
We performed simulations to see how information mutation influences message spreading in the network. As expected, when modifying messages was allowed, they could penetrate deeper into the network. However, we observed that this happens only within a specific range of the tolerance threshold; when the threshold is too low or too high, mutation does not play a significant role in the dynamics.
These results hold when agents' opinions are purely random. This is rarely the case in real social networks, however, where people often seek out like-minded individuals. To capture this, we also performed simulations in which there is a varying degree of similarity between the opinion vectors of connected agents.
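One way to generate connected agents with a tunable degree of opinion similarity is a component-wise copy rule: each entry of a neighbor's opinion vector is copied from the reference agent with probability q and redrawn uniformly from {-1, 0, 1} otherwise. This mechanism is our own illustrative assumption; the abstract does not specify how the correlation was introduced.

```python
import numpy as np

def correlated_opinion(base, q, rng):
    """Return a neighbor's opinion vector: each component is copied from
    `base` with probability q, otherwise redrawn uniformly from {-1, 0, 1}.
    q = 1 gives an identical copy; q = 0 gives an independent random vector."""
    fresh = rng.integers(-1, 2, size=base.shape)   # uniform over {-1, 0, 1}
    keep = rng.random(base.shape) < q
    return np.where(keep, base, fresh)

rng = np.random.default_rng(42)
base = np.array([1, 0, -1, 1, -1])
print(correlated_opinion(base, q=0.8, rng=rng))
```

Sweeping q from 0 to 1 then interpolates between the purely random case and a fully homophilic network.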