Introduction
Recent developments in technology, such as social bots equipped with artificial intelligence and supported by machine learning and deep learning techniques, have taken control of how information is shared over networks. These social bots also play an active role in creating and spreading information across online networks.1 Because they are more attuned to feelings than to facts, such intelligent algorithms emphasize the level of user involvement (clicks, comments, shares, etc.) rather than the characteristics of the information itself, such as whether it is true or false, pro- or anti-social, or beneficial or harmful to society. In their effort to engage users, they seek out information that stirs up emotions, and facts are frequently not what drive online dialogues.1 Because information on the internet is controlled in a decentralized manner, partly by individuals and partly by organizations (companies operating social media, news, and other platforms), it has become increasingly difficult to govern data on the internet. In addition, on social media platforms, where users are primarily responsible for generating content, overseeing the creation and flow of information across networks has become a Sisyphean effort for regulators and platform providers. This can lead to the rapid spread of misinformation (incorrect or misleading information that is not intentionally created) and disinformation (false information that is deliberately created to deceive, or the intentional spread of misinformation)1–3 over social media and other online networks, which can have a significant impact on the organizations or society concerned and can lead to the consolidation of power in a small number of information management and social networking platforms such as Google, Facebook, Twitter, and WhatsApp.4,5 This has become a source of contention between governments and social networking and information management platforms such as Facebook over privacy, the inappropriate use of information, and the protection of users’ personal data.6,7 Regulating online platforms to protect human rights and prevent political abuse is relatively new, and only a small number of nations have implemented policies and regulations in this area.8 In addition, several key players, including Facebook, Google, Twitter, WhatsApp, and ShareChat, agreed to abide by a social media code of ethics to avoid the inappropriate use of information.9 It is believed, however, that this approach is unlikely to contain the spread of fake news (incorrect or misleading information) on social media platforms, because more responsibility falls on users and their conduct than on the platform providers.10
Disinformation on various topics can be observed online, such as fake news intended to influence stock markets or politics. However, health-related disinformation is one of the more serious forms, as it can significantly affect society. An increase in the incorrect interpretation of scientific knowledge, the polarization of opinions, the heightening of fear and terror, and reduced access to health treatment are all examples of potential negative outcomes of health-related disinformation.11 Recent studies12–15 of social media platforms observed that misinformation was present in 51% of posts about vaccines, 28.8% of posts about Covid-19, and 60% of posts about pandemics, and that about 20–30% of YouTube videos about emerging infectious diseases contain inaccurate information. Studies16–22 also observed various impacts of health disinformation on patients, including misunderstanding of symptoms and conditions, delayed or inappropriate treatment, a false sense of security, misinformation about treatments, and worsening of mental health. Online health disinformation affects not only patients but also the social, technical, economic, legal, and environmental aspects of society.11
However, there is a lack of research11 assessing the impact of online health disinformation in a broader context, as most studies are limited to the impact on patients.12–15 Researching online health disinformation and misinformation in Saudi Arabia is of paramount importance, as such disinformation directly impacts public health, erodes trust in healthcare systems, and poses significant challenges during pandemics. Understanding the cultural and linguistic nuances specific to Saudi Arabia is crucial for crafting effective interventions to counteract misinformation. Furthermore, Saudi Arabia’s Vision 2030 outlines a comprehensive and ambitious plan for the transformation of various sectors in the country, including health care. The healthcare goals and initiatives within Vision 203023 are designed to enhance the quality of healthcare services, improve access to health care, and promote the overall well-being of Saudi citizens by leveraging innovative technologies such as artificial intelligence, machine learning, and cloud-based systems to achieve sustainable healthcare operations. However, previous studies focusing on health-related disinformation in the context of Saudi Arabia are relatively few, signifying the need for extensive research in this area, and concrete action in the country to address this issue and improve public health has remained limited. In the evolving technological environment, online health disinformation can significantly affect public health,23 as observed during the recent Covid-19 pandemic.12 In this context, this study aims to analyze public perceptions of and attitudes towards online health information, investigate the impact of online health disinformation on society, and propose a framework for addressing the issue of online health disinformation. This research not only informs policy development and regulation but also enables tailored educational programs to enhance digital and health literacy. By identifying vulnerable populations and fostering international collaboration, such research supports data-driven decision-making and ultimately contributes to the protection of individuals’ health and well-being in the Saudi Arabian context.
Methods
As this study focuses on analyzing public perceptions about online health disinformation and its impact on society, different types of data needed to be collected from diverse populations. Therefore, a mixed-methods design was used, comprising a cross-sectional survey to gather data from the public and semi-structured interviews with healthcare experts to analyze the impact of online health disinformation on society.
Study Settings and Participants
Semi-structured interviews facilitate a conversation between at least two people about a topic of shared interest, allowing for interpretational agreements/disagreements and the exchange of perspectives.24 Researchers may select this format because it provides the opportunity to pre-formulate questions and to add more pertinent ones later in the process of questioning.24,25 Quantitative surveys, on the other hand, are an efficient way to collect the views of a large number of individuals, which helps to generalize findings across larger populations.26 In this study, the quantitative survey was used to generalize findings about public perceptions of online health information, and the semi-structured interviews were used to examine experts’ viewpoints on the impact of online health disinformation. As public perceptions of online health-related disinformation were assessed through an online survey questionnaire, the survey target population included all residents of Saudi Arabia aged above 18 years. Assessing the impact of online health disinformation, however, requires participants with expertise in eHealth. Therefore, the target population for the semi-structured interviews was a diverse group of participants comprising physicians involved in eHealth or online consultations, academic professors, and eHealth security experts.
Instruments
The survey questionnaire was designed by the authors in consultation with academic experts in eHealth. The questionnaire has two sections: the first collects the participants’ demographic information, including gender, age, education, and employment status. The second section includes 20 multiple-choice questions on online health information usage, covering the frequency of internet use, approaches used to evaluate online health information, concerns about online health disinformation, and users’ opinions on the effectiveness of various approaches for preventing or mitigating the impact of online health disinformation, identified from previous studies.27–29 A copy of the questionnaire is provided in Appendix A. A pilot study was conducted with twelve students (six third-year bachelor’s students from a health informatics course and six second-year bachelor’s students from a commerce group), followed by a discussion on the quality of the questionnaire. All students found the questionnaire understandable and easy to follow.
The interview questionnaire was designed by the authors in consultation with academic experts in health informatics. It initially had four questions assessing the impact of online health disinformation from social, technical, legal, and environmental perspectives. An additional question, focusing on the strategies the public can use to safeguard themselves from health disinformation, was included later.
Recruitment and Sampling
Cochran’s formula30 was used to calculate the required sample size for the survey, which resulted in an estimate of 385 participants. For the interviews, the estimated sample size was 25, in line with Dworkin’s suggestion31 that a good sample for qualitative interviews ranges between 20 and 30 interviews. A snowball sampling technique was adopted for the survey, with participants requested to forward the invitation message to their colleagues, friends, and family members26 in order to increase participation. For the interviews, a purposive sampling technique32 was adopted to select participants specialized in eHealth and health informatics.
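For reference, the reported sample size can be reproduced with Cochran’s formula under the conventional parameter choices of a 95% confidence level (z = 1.96), maximum variability (p = 0.5), and a 5% margin of error; these parameter values are assumptions for illustration, as the study does not report them. A minimal sketch in Python:

```python
import math

def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Cochran's formula for a large population: n0 = z^2 * p * (1 - p) / e^2."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    return math.ceil(n0)

# z = 1.96, p = 0.5, e = 0.05 gives n0 = 384.16, which rounds up to the 385
# samples reported above (parameter values assumed, not stated in the paper).
print(cochran_sample_size())  # 385
```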
Data Collection
The online survey questionnaire was created using Google Forms, and the survey link was forwarded to the authors’ friends and colleagues through social media applications (WhatsApp, Facebook, LinkedIn). For the interviews, an invitation email was sent to the selected participants, who included academic professors and health information experts. The interviews were conducted online in English using the Zoom application, at times agreed with the interested participants, and lasted 40 to 60 minutes on average. Informed consent was obtained through online forms for the interview participants and through the online questionnaire for the survey participants.
Data Analysis
As the survey analysis was mainly descriptive, the results were summarized using frequencies and means and are explained descriptively. The recorded interviews were converted into transcripts (text documents) using the NVivo software. The interview data were analyzed using the thematic analysis framework suggested by Braun and Clarke.33 Initially, distinct codes highlighting specific information were identified in each interview transcript. The codes were then grouped into themes based on their similarities, which were then used to analyze the results.
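As an illustration of the descriptive survey analysis, the sketch below computes frequency counts and percentages for a multiple-choice question; the file name and question label are hypothetical placeholders for the exported questionnaire data, since the study does not describe the tooling used for this step.

```python
import pandas as pd

# Load the exported survey responses; "responses.csv" and the question text
# below are hypothetical placeholders, not the study's actual export.
responses = pd.read_csv("responses.csv")

def frequency_table(series: pd.Series) -> pd.DataFrame:
    """Counts and percentages for one multiple-choice question."""
    counts = series.value_counts(dropna=False)
    percents = (counts / len(series) * 100).round(1)
    return pd.DataFrame({"count": counts, "percent": percents})

print(frequency_table(responses["How often do you search for health information online?"]))
```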
Ethical Considerations
The study was given ethical approval by the Imam Abdulrahman Bin Faisal University’s research ethics committee. Data collection and analysis were carried out in compliance with all applicable ethical norms. Full disclosure was made regarding the objective of the investigation and the rights of those who took part. Each interview participant was given a pseudonym to preserve their privacy and maintain their anonymity. Every participant provided informed consent before the start of the surveys and interviews, and their involvement was fully voluntary throughout the entire process.
Results
Survey Results
A total of 447 responses were received for the survey. However, 38 responses were incomplete and were excluded, leaving 409 responses for the data analysis. The demographic profile of the participants is presented in Table 1.
Table 1 Participants’ Demographics
Most of the participants used the internet frequently to search for health information, ie, one or more times a week (57.2%) or one or more times a day (9.5%). Regarding participants’ responses on encountering online health disinformation (see Table 2), the majority changed their behaviors and beliefs (72.3%), and most shared the disinformation (78.5%). It is also evident that the majority of participants (68.9%) had encountered online health disinformation, and about 67.6% had experienced negative consequences of it. It is notable that about 73.4% of participants had avoided seeking medical treatment or advice based on information they found online. However, fewer participants (37.4%) had been exposed to online health-related disinformation targeted specifically at certain groups or individuals. More than 80% of the participants believed that health-related disinformation is a significant public health issue and that online platforms have a responsibility to prevent its spread. On the other hand, 55.8% of the participants had never questioned the accuracy or reliability of health information provided by a healthcare provider, and 78.6% had never reported health-related disinformation they found online.
Table 2 Participants’ Responses to Online Health Disinformation
It was also observed that the majority of the participants were extremely concerned (44.3%) or very concerned (39.7%) about the presence of online health-related disinformation, while only a few (3.4%) were not at all concerned. Regarding the responsibility of online platforms to fact-check health-related information before allowing it to be shared online, most of the participants considered it extremely important (54.8%) or very important (39.6%). Concerning the responsibility of online platforms in educating the public on evaluating online health information (Figure 1), offering educational resources (91.7%), tailoring educational efforts to different demographics (87.5%), providing fact-checking resources (83.9%), incorporating user feedback mechanisms (83.9%), and promoting reliable sources (81.4%) were identified as the most preferred approaches.
Figure 1 Online platforms’ efforts in educating the public on evaluating online health information.
In relation to the steps that can be taken to prevent or mitigate the impact of online health-related disinformation (see Table 3), improving public health communication (91.4%), strengthening media literacy and critical thinking skills among the public (87.3%), supporting fact-checking organizations (78.9%), empowering healthcare professionals in disseminating health information (76.5%), and enhancing digital platform responsibility (75.1%) were identified as the most preferred approaches.
Table 3 Steps to Prevent or Mitigate the Impact of Online Health-Related Disinformation
Regarding how participants evaluate online health information (see Figure 2), the majority consider the reputation of the publication or website (89.4%), the consistency of the information with their established knowledge (81.6%), and their own critical thinking skills (72.8%).
Figure 2 User-centric approaches for evaluating online health information.
However, only a few participants preferred other effective approaches such as examining authors’ credentials (39.7%), cross-referencing and corroboration (41.5%), and source evaluation (48.3%).
Table 4 presents a further analysis of the differences in perceptions of the user-centric approaches for evaluating health information among different participant groups. A greater number of male participants regarded “expertise and author credentials” and “source evaluation” as better approaches, while female participants perceived “consistency with established knowledge” as a better approach compared to their counterparts.
Table 4 Perceptions of User-Centric Approaches Among the Participant Groups
Furthermore, a greater number of younger participants perceived all the approaches to be effective compared to older participants. Similar differences were observed between bachelor’s and master’s degree students on the one hand and diploma students on the other. These findings indicate that age and education can influence users’ perceptions of various online health information evaluation techniques.
Interview Results
A total of 22 interviewees participated in the semi-structured interviews: physicians (8), academic professors (9), and health informatics experts working in an IT firm (5). Thematic analysis of the interview data resulted in 32 codes and 97 sub-codes, which were grouped under four themes related to the impact of online health disinformation and one theme related to the strategies the public can adopt to minimize this impact. These themes are presented in the following sections.
Social Impact
Seven sub-themes reflecting the social impact were analyzed. These include
- Erosion of trust: Online health disinformation undermines trust in institutions, experts, and authoritative sources of information. In this context, one of the interviewees stated that
when people encounter conflicting or false health information online, it can lead to skepticism and a diminished trust in healthcare professionals, scientific research, public health agencies, and traditional media outlets.
disinformation can lead to increased tribalism, hostility, and an ‘us versus them’ mentality, hindering constructive dialogue and collaboration on important health issues. For example, the use of animal fats in a Covid-19 vaccine became an issue, which led to disruptions in vaccination among some communities.
disinformation can undermine disease control measures, such as vaccination campaigns, by spreading doubts about vaccine safety and effectiveness. This can impede efforts to achieve herd immunity and prevent the spread of infectious diseases, leading to prolonged outbreaks and increased healthcare burden.
Economic Impact
Five sub-themes reflecting the economic impact of online health disinformation were identified from the analysis of interviews. These include the following:
- Increased healthcare costs: Interviewees observed that when individuals are misled by false information and pursue ineffective or unnecessary treatments, it can lead to overutilization of healthcare resources. This places a strain on healthcare systems, resulting in higher costs for individuals, insurance providers, and governments.
- Wasteful spending: Misleading claims about health products, treatments, or remedies can exploit individuals seeking relief or solutions. People may spend money on ineffective or unproven remedies, dietary supplements, or alternative therapies promoted through disinformation. This can result in wasteful spending and financial loss for individuals and their families.
- Economic exploitation: Online health disinformation can be used for economic exploitation, particularly through fraudulent or deceptive practices. Unscrupulous individuals or organizations may capitalize on false health claims to market and sell products or services that offer little to no benefit. This can lead to financial exploitation of vulnerable individuals seeking improved health outcomes. In this context, one of the interviewees stated that
I often come across online ads about hair fall making false claims about its results and the ingredients, but those who are unaware of it may be tempted to purchase such medicines.
Technical Impact
Six sub-themes reflecting the technical impact of online health disinformation were identified from the analysis of interviews. These include the following:
- Algorithmic amplification: Online platforms and social media algorithms can inadvertently amplify health disinformation. Algorithms designed to maximize user engagement and retention may prioritize and promote content based on user preferences and behaviors, potentially leading to the spread of false health information. This can create echo chambers and filter bubbles that reinforce disinformation (a minimal illustrative sketch of engagement-based ranking follows this list).
- Viral spread and reach: Online health disinformation can spread rapidly and reach a wide audience due to the nature of social media and online communication channels. False claims, conspiracy theories, or sensationalized headlines can go viral, gaining significant attention and circulation. This can lead to a widespread impact on public perception and beliefs.
- Manipulation of search results: Individuals searching for health information online may encounter misleading or false content prominently displayed in search results. Manipulation of search engine optimization techniques and the use of deceptive practices can result in the prominence of disinformation in search rankings, making it harder for users to access accurate information.
- Creation of echo chambers: Online platforms may inadvertently foster the creation of echo chambers, where individuals are exposed only to information that aligns with their existing beliefs and biases. This can contribute to the reinforcement of health disinformation, as like-minded individuals reinforce and amplify false narratives, limiting exposure to diverse viewpoints and accurate information.
- Creation of disinformation networks: Online health disinformation may give rise to networks and communities dedicated to the spread of false information. These networks may leverage various technological tools, such as automated bots, fake accounts, or coordinated campaigns, to amplify their messages and manipulate public perception.
- Impact on data privacy: Online health disinformation may have implications for data privacy. Misleading health information may lure individuals into sharing personal information or engaging in potentially harmful practices, jeopardizing their privacy and security.
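To make the algorithmic amplification mechanism described above concrete, the toy sketch below ranks posts purely by engagement signals, so accuracy never enters the score; it is an illustrative assumption with arbitrary weights, not the ranking algorithm of any actual platform.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    clicks: int
    shares: int
    comments: int
    accurate: bool  # never used by the ranker, which is the point

def engagement_score(post: Post) -> float:
    """Score a post purely by engagement signals (weights chosen arbitrarily)."""
    return post.clicks + 2 * post.shares + 3 * post.comments

feed = [
    Post("Peer-reviewed vaccine safety summary", clicks=120, shares=10, comments=5, accurate=True),
    Post("Shocking 'miracle cure' they won't tell you about", clicks=300, shares=150, comments=90, accurate=False),
]

# The emotive, inaccurate post outranks the accurate one because it attracts more engagement.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>5.0f}  accurate={post.accurate}  {post.title}")
```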
Legal Impact
Four sub-themes reflecting the legal impact of online health disinformation were identified from the analysis of interviews. These include the following:
- Intellectual property infringement: Online health disinformation may involve the unauthorized use of trademarks, copyrights, or other intellectual property. If false health information is disseminated using someone else’s protected intellectual property, it may lead to legal actions for infringement.
- Regulatory violations: Health disinformation can run afoul of specific regulations governing the healthcare industry. For instance, making false claims about the safety, efficacy, or benefits of healthcare products, drugs, or medical devices may violate regulatory standards and result in legal consequences.
- Fraudulent practices: Online health disinformation that involves fraudulent practices, such as promoting fake cures or miracle treatments, can lead to legal actions for fraud. Individuals or organizations engaging in deceptive practices for financial gain may face criminal charges or civil lawsuits.
- Violation of privacy laws: Online health disinformation that involves the unauthorized disclosure of personal health information may violate privacy laws. Sharing sensitive health information without consent or in a misleading manner may result in legal liabilities.
Strategies
Strategies to prevent the impact of online health disinformation were identified from the interview data under three sub-themes, which include the following:
- Verify information from credible sources such as government health agencies, reputable medical organizations, and established healthcare institutions.
- Develop media literacy skills to critically evaluate and analyze health information encountered online. This involves questioning the source, checking for bias, assessing the credibility of the author or website, and cross-referencing information with multiple reliable sources.
- Seek guidance from healthcare professionals such as doctors, nurses, or pharmacists, for accurate and personalized health information.
- Be cautious of anecdotal evidence such as personal testimonials when making health decisions. Individual experiences may not reflect broader scientific consensus or evidence-based practices.
- Self-learning about digital literacy and online information evaluation techniques to identify disinformation, understand common techniques used in misleading content, and differentiate between credible sources and unreliable platforms.
- Staying updated with current research and scientific advancements in health care by referring to trusted scientific journals, medical publications, and reputable research institutions to access accurate and up-to-date information.
- Actively participate in reporting false health information encountered online. Most social media platforms and search engines have mechanisms to report disinformation.
- Engage in fact-checking websites and organizations that can help verify the accuracy of health claims and debunk false information.
- Educate others by sharing knowledge and awareness about health disinformation with friends, family, and community members.
- Promote media literacy and education by implementing comprehensive media literacy programs in schools and educational institutions to equip individuals with the skills to critically evaluate health information.
- Enhance public health communication by providing accurate, accessible, and timely information through clear communication channels such as official websites, social media platforms, and dedicated hotlines.
- Collaborate with technology companies to develop and implement measures that address health disinformation.
- Develop and enforce regulations that address health disinformation while balancing freedom of expression. Establish clear guidelines for online platforms, social media companies, and content creators regarding the responsible dissemination of health information.
- Support fact-checking initiatives by independent fact-checking organizations focused on health-related disinformation through funding and allocation of resources. Collaborate with fact-checkers to verify and debunk false health claims, and promote their findings through official channels and public awareness campaigns.
- Collaborate with international organizations, governments, and stakeholders to develop coordinated strategies for addressing health disinformation.
- Allocate resources for research and monitoring initiatives focused on health disinformation. Foster collaborations between academia, research institutions, and public health agencies to generate evidence-based insights and develop effective strategies.
- Collaborate with healthcare professionals and medical associations to ensure accurate health information reaches the public. Provide platforms for medical experts to communicate directly with the public, address common misconceptions, and clarify misleading information.
- Encourage transparency and accountability from online platforms and technology companies in their content moderation policies and algorithms.
- Launch public awareness campaigns to educate citizens about the risks of health disinformation and provide guidance on evaluating health information.
- Develop official channels of communication, such as websites, social media accounts, and newsletters, to directly provide accurate and evidence-based health information to the public.
- Provide ongoing education and training to healthcare professionals about identifying and addressing health disinformation.
- Create patient education materials that address common health misconceptions, debunk myths, and provide accurate information. Ensure that these materials are accessible, user-friendly, and available in multiple languages.
- Collaborate with reputable sources such as government health agencies, academic institutions, and medical associations to ensure the dissemination of accurate health information.
- Utilize social media platforms, online forums, and other digital channels to engage with the public and provide accurate health information.
- Develop systems to monitor and identify instances of health disinformation that circulate online. Actively address disinformation by providing accurate information as a response, collaborating with fact-checking organizations, or reporting false information to online platforms for removal.
- Encourage healthcare professionals to actively participate in countering health disinformation by using their expertise and credibility.
- Invest in research that examines the impact of health disinformation on public health and identifies effective strategies to combat it.
- Establish relationships with media organizations to ensure accurate reporting and responsible journalism on health-related topics.
- Participate in policy discussions and advocate for regulations that address health disinformation.
Discussion
The findings from both the surveys and the interviews have addressed the aim of this study, and are discussed in this section. Firstly, focusing on public perceptions of and engagement with online health disinformation, it can be observed that online health disinformation is frequently encountered by the public and can significantly influence their behaviors. Most of the participants modified their behaviors and shared the disinformation they came across with others, resulting in its quick spread. Moreover, more than half of the survey participants did not question the accuracy and reliability of online health information, and more than three-quarters never reported the health-related disinformation they encountered. These findings can be related to studies12–15 observing that more than 50% of social media posts about topics such as vaccines and pandemics contained disinformation, reflecting the quick spread of disinformation due to lack of awareness and poor digital literacy among users. This is also evident from the approaches used by the participants for evaluating online health information, where important techniques such as source evaluation, examining authors’ credentials and the quality of references, and cross-referencing and corroboration were not used by most of the participants. It is further supported by the survey results, where more than two-thirds of the participants preferred improved public health communication and improved literacy and critical thinking skills among the public as the means to prevent or mitigate the impact of online health disinformation. Regarding the steps to be taken, participants regarded self-development and learning skills as a more important approach than relying on online platforms and other organizations to take responsibility. Thus, the survey findings reflect the participants’ low digital literacy skills in identifying online health disinformation and highlight the need for user-centered interventions, such as self-learning and support for education, rather than reliance on external entities like online platforms, to prevent the impact of online health disinformation.
Focusing on the impact of online health disinformation, the findings identified four areas of impact: social, economic, technical, and legal. Considering the social impact, the most significant effect was the erosion of public trust in healthcare institutions, which may directly affect public health and health policies and lead to disruptions in public health efforts such as welfare schemes and vaccinations, as observed in previous studies.13,34,35 In addition, social stigma, polarization, and discrimination against certain communities may arise from health disinformation, as observed in previous studies.36,37 Moreover, as observed in the findings, health disinformation can also affect mental health, leading to stress and anxiety.38,39 Focusing on the economic impact, it was observed that online health disinformation can lead to increased healthcare costs for individuals, insurance providers, and governments, as well as wasteful spending and economic exploitation. Furthermore, such disinformation can significantly affect the economics of industries40 such as pharmaceutical companies and other health-related industries. Focusing on the technical impact, interesting findings were observed, such as the algorithmic amplification of disinformation by artificial intelligence and machine learning applications, whose adoption has been rapidly increasing in the past few years; this may amplify the spread of online health disinformation if not regulated. Focusing on the legal impact, regulatory violations and violations of privacy laws are possible if the disinformation involves the unauthorized disclosure of personal health information.
Focusing on the strategies, several measures were identified from the findings, the most prominent being the development of digital literacy skills and continuous self-learning, which is also supported by the survey participants. In addition, using fact-checking websites and reporting disinformation to the authorities is an important finding, as this can help prevent the spread of disinformation. Similarly, strategies such as launching health awareness campaigns,41 enforcing regulations, providing incentives to collaborators, improving communication channels, and increasing the role of physicians and healthcare professionals in disseminating correct information were identified for governments and healthcare institutions, which are similar to the recommendations provided in previous studies.42–45 Based on the identified strategies, a framework for managing online health disinformation is proposed in Figure 3, outlining strategies for three key stakeholders: the public, governments, and healthcare institutions.
Figure 3 Framework for managing online health disinformation.
The findings from this study have both theoretical and practical implications. They contribute to the limited literature on online health disinformation, its impacts, and the strategies to manage it. This study can also guide researchers in expanding research on online health disinformation from various perspectives, such as the impact on various stakeholders, the roles of stakeholders, and policy perspectives. In addition, the findings can aid the public, decision-makers, healthcare authorities, and institutions in developing policies and strategies aimed at managing online health disinformation. However, a few limitations were identified in this study. Firstly, the survey gathered data from the public in only one region (the Eastern Province of Saudi Arabia); as a result, the findings must be generalized with care. Secondly, the strategies suggested through the framework are based on these findings and may not generalize across all regions; necessary modifications must be made in line with regional demographic conditions. Thirdly, the quantitative results may be biased due to the use of snowball sampling.
Conclusion
The purpose of this study was to analyze public perceptions of and attitudes towards online health information, investigate the impact of online health disinformation on society, and propose a framework for addressing the issue. The results identified low digital literacy levels and a lack of critical analysis skills among the public in Saudi Arabia, which result in poor identification of health disinformation and contribute to its quick dissemination over online networks. Furthermore, age and education played a significant role in determining the approach used to evaluate online health information, indicating demographic differences and highlighting the need for governments to develop strategies specific to different demographic populations. Moreover, a significant impact of online health disinformation was observed on four fronts: social, economic, technical, and legal. Considering the seriousness of this impact, there is an immediate need to adopt an effective and efficient framework for managing online health disinformation in order to improve public health and ensure the smooth functioning of healthcare organizations.
Disclosure
The author reports no conflicts of interest in this work.
References
1. Jussila J, Suominen AH, Partanen A, Honkanen T. Text analysis methods for misinformation–related research on Finnish language twitter. Future Internet. 2021;13(6):157. doi:10.3390/fi13060157
2. Petratos PN. Misinformation, disinformation, and fake news: cyber risks to business. Bus Horiz. 2021;64(6):763–774. doi:10.1016/j.bushor.2021.07.012
3. Caramancion K. An exploration of disinformation as a cybersecurity threat. In
4. Faris R, Donovan J. The future of platform power: quarantining disinformation. J Demo. 2021;32(3):152–156. doi:10.1353/jod.2021.0040
5. Moore M, Tambini D. Digital Dominance.
6. Durkee A. Federal government sues Facebook—again—after court struck down first lawsuit. Available from: https://www.forbes.com/sites/alisondurkee/2021/08/19/federal-government-sues-facebook-again-after-court-struck-down-first-lawsuit/?sh=a1d4daf30e99.
7. Guo E. Facebook is now officially too powerful, says the US government. Available from: https://www.technologyreview.com/2020/12/09/1013641/facebook-should-be-broken-up-says-us-government/.
8. EU: Put fundamental rights at top of digital regulation. Available from: https://www.hrw.org/news/2022/01/07/eu-put-fundamental-rights-top-digital-regulation.
9. Chaturvedi A. Social media code of ethics unlikely to tame fake news, say analysts. Available from: https://economictimes.indiatimes.com/tech/internet/social-media-code-of-ethics-unlikely-to-tame-fake-news-say-analysts/articleshow/68531164.cms?from=mdr.
10. Barrett-Maitland N, Lynch J. Social media, ethics and the privacy paradox. In: Security and Privacy from a Legal, Ethical, and Technical Perspective. IntechOpen; 2020. doi:10.5772/intechopen.90906
11. Borges Do Nascimento IJ, Pizarro AB, Almeida JM, et al. Infodemics and health misinformation: a systematic review of reviews. Bull World Health Organ. 2022;100(9):544–561. doi:10.2471/BLT.21.287654
12. Gabarron E, Oyeyemi SO, Wynn R. COVID-19-related misinformation on social media: a systematic review. Bull World Health Organ. 2021;99(6):455–463A. doi:10.2471/BLT.20.276782
13. Suarez-Lledo V, Alvarez-Galvez J. Prevalence of health misinformation on social media: systematic review. J Med Internet Res. 2021;23(1):e17187. doi:10.2196/17187
14. Tang L, Bie B, Park S-E, Zhi D. Social media and outbreaks of emerging infectious diseases: a systematic review of literature. Am J Infect Control. 2018;46(9):962–972. doi:10.1016/j.ajic.2018.02.010
15. Wang Y, McKee M, Torbica A, Stuckler D. Systematic literature review on the spread of health-related misinformation on social media. Soc Sci Med. 2019;240:112552. doi:10.1016/j.socscimed.2019.112552
16. Ahmad A, Murad H. The impact of social media on panic during the COVID-19 Pandemic in Iraqi Kurdistan: Online Questionnaire Study. J Med Internet Res. 2020;22(5):e19556. doi:10.2196/19556
17. Nan X, Wang Y, Thier K. Why do people believe health misinformation and who is at risk? A systematic review of individual differences in susceptibility to health misinformation. Soc Sci Med. 2022;314:115398. doi:10.1016/j.socscimed.2022.115398
18. Ghenai A. Health misinformation in search and social media. In
19. Rocha YM, de Moura GA, Desidério GA, de Oliveira CH, Lourenço FD, de Figueiredo Nicolete LD. The impact of fake news on social media and its influence on health during the COVID-19 pandemic: a systematic review. J Public Health (Bangkok). 2021;1–10. doi:10.1007/s10389-021-01658-z
20. Thompson TL, Harrington NG. The Routledge Handbook of Health Communication. New York: Routledge, Taylor & Francis Group; 2022.
21. Pulido CM, Ruiz-Eugenio L, Redondo-Sama G, Villarejo-Carballido B. A new application of social impact in social media for overcoming fake news in health. Int J Environ Res Public Health. 2020;17(7):2430. doi:10.3390/ijerph17072430
22. Greenspan RL, Loftus EF. Pandemics and infodemics: research on the effects of misinformation on memory. Human Behav Emer Technol. 2020;3(1):8–12. doi:10.1002/hbe2.228
23. Rahman R, Al-Borie HM. Strengthening the Saudi Arabian healthcare system: role of Vision 2030. Int J Healthcare Manage. 2020;14(4):1483–1491. doi:10.1080/20479700.2020.1788334
24. Cohen L, Manion L, Morrison M. Research Methods in Education.
25. Brinkmann S. Unstructured and semi-structured interviewing. Oxford Handbook Qual Res. 2014;2:277–299.
26. Saunders M, Lewis P, Thornhill A. Research Methods for Business Students. Essex: Pearson Education Limited; 2016.
27. Sun Y, Zhang Y, Gwizdka J, Trace C. Consumer evaluation of the quality of online health information: Systematic Literature Review of Relevant Criteria and Indicators. J Med Internet Res. 2019;21(5):e12522. doi:10.2196/12522
28. Ma T, Atkin D. User generated content and credibility evaluation of online health information: a meta analytic study. Telema Inform. 2017;34(5):472–486. doi:10.1016/j.tele.2016.09.009
29. Nsoesie EO, Oladeji O. Identifying patterns to prevent the spread of misinformation during epidemics. Harvard Kennedy School Misinform Rev. 2020;1(3):1–9. doi:10.37016/mr-2020-014
30. Woolson RF, Bean JA, Rojas PB. Sample size for case-control studies using Cochran’s statistic. Biometrics. 1986;42(4):927. doi:10.2307/2530706
31. Dworkin SL. Sample size policy for qualitative studies using in-depth interviews. Arch Sex Behav. 2012;41(6):1319–1320. doi:10.1007/s10508-012-0016-6
32. Campbell S, Greenwood M, Prior S, et al. Purposive sampling: complex or simple? Research case examples. J Res Nurs. 2020;25(8):652–661. doi:10.1177/1744987120927206
33. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101. doi:10.1191/1478088706qp063oa
34. Muhammed TS, Mathew SK. The disaster of misinformation: a review of research in social media. Int J Data Sci Analy. 2022;13(4):271–285. doi:10.1007/s41060-022-00311-6
35. Barua Z, Barua S, Aktar S, Kabir N, Li M. Effects of misinformation on covid-19 individual responses and recommendations for resilience of disastrous consequences of misinformation. Prog Dis Sci. 2020;8:100119. doi:10.1016/j.pdisas.2020.100119
36. Mahmud A, Islam MR. Social stigma as a barrier to covid-19 responses to community well-being in Bangladesh. Int J Community Wellbeing. 2021;4(3):315–321. doi:10.1007/s42413-020-00071-w
37. Islam A, Pakrashi D, Vlassopoulos M, Wang LC. Stigma and misconceptions in the time of the COVID-19 pandemic: a field experiment in India. Soc Sci Med. 2021;278:113966. doi:10.1016/j.socscimed.2021.113966
38. Rocha YM, de Moura GA, Desidério GA, de Oliveira CH, Lourenço FD, de Figueiredo Nicolete LD. The impact of fake news on social media and its influence on health during the COVID-19 pandemic: a systematic review. Z Gesundh Wiss. 2021;1–10. doi:10.1007/s10389-021-01658-z
39. Verma G, Bhardwaj A, Aledavood T, et al. Examining the impact of sharing COVID-19 misinformation online on mental health. Sci Rep. 2022;12(1):8045. doi:10.1038/s41598-022-11488-y
40. Petratos PN. Misinformation, disinformation, and fake news: cyber risks to business. Bus Horiz. 2021;64(6):763–774. doi:10.1016/j.bushor.2021.07.012
41. Bahrami MA, Nasiriani K, Dehghani A, Zarezade M, Kiani P. Counteracting online health misinformation: a qualitative study. Quart J Manag Strate Health Sys. 2019. doi:10.18502/mshsj.v4i3.2056
42. The Merck Manuals. Doctors and patients both have a role to play in addressing medical misinformation online. Available from: https://www.merckmanuals.com/home/news/editorial/2022/11/07/04/04/medical-misinformation.
43. Hilton L. Impact and consequences of medical misinformation. Available from: https://www.contemporarypediatrics.com/view/impact-and-consequences-of-medical-misinformation.
44. Walter N, Brooks JJ, Saucier CJ, Suresh S. Evaluating the impact of attempts to correct health misinformation on social media: a meta-analysis. Health Commun. 2020;36(13):1776–1784. doi:10.1080/10410236.2020.1794553
45. Au CH, Ho KKW, Chiu DKW. Stopping healthcare misinformation: the effect of financial incentives and legislation. Health Policy (New York). 2021;125(5):627–633. doi:10.1016/j.healthpol.2021.02.010