Featured Blogs
Menu
  • Dealing with your Data

    by Greg Margason | Nov 22, 2024

    Let’s face it, sometimes data can be dry. As researchers and practitioners, our goal is to collect and publish data, but also to share this information in an accessible and engaging manner. How then do we make numbers or text more intriguing so larger audiences can understand it and possibly apply it?

Here are nine strategies to employ:

• Tell a story: Storytelling is a timeless and impactful approach to communicating ideas. By interweaving your data with a story (e.g., about a population or about a problem), you can take abstract data and use it to draw people in and get your audience invested.
• Build a narrative: Crafting a narrative around your findings gives data context and continuity. Sharing how the research question emerged, who the findings may impact, or highlighting connections to the real world can ‘connect the dots’ and keep the audience interested. Incomplete storytelling can lead to incorrect assumptions about your data.
• Use color and font thoughtfully: Simple changes such as an eye-catching accent color or a color-banded header can do wonders. In a practical sense, color and formatting can emphasize key points, add clarity, and aid comprehension. These are powerful steps toward translating your data’s message.
• Use text wisely: The dreaded text-heavy slide or poster [insert groan here]. We have all seen these. This is where editing, bullets (think rule of 6: keep bullets to about six words), and concise writing will make the difference in getting your reader to understand your content. We are all guilty of using too much scientific jargon, but as a prominent journalist once told me, “If you can’t explain it simply, you don’t know your information that well.” Writing in plain, nonscientific language will make your data accessible to a wider range of audiences. Having a non-expert review your work can provide an objective impression of the data being displayed.
    • Include axes and labels on graphs: This may seem obvious, but clear axes and labels help viewers of all levels interpret data, identify patterns, and grasp the data’s main story.  
    • Data visualization: Visuals can help make data more memorable and appealing. You can use a variety of visuals such as charts, graphs, maps, images, or icons depending on the type and amount of data. Infographics, or graphical abstracts, can also be used to communicate research through unique and purposeful mechanisms.
• Use predictable patterns: Consistent colors, fonts, and layouts bring clarity and help key points stand out. Position your main points in memorable locations on the page, just as eating establishments direct your attention to popular or pricey menu items. Keeping to the same font and color scheme builds repetition and helps your reader remember important information.
• Interactive flowcharts and bubble charts: These can be very helpful for teaching or making data-driven decisions. It is essential to know which variables need to be communicated when selecting the chart or graph type that best suits the data at hand.
    • Ask good questions: Approaching data with curiosity and a commitment to integrity fosters more compelling results. By pairing genuine interest with a desire to make your findings meaningful for your audience, your enthusiasm will resonate and bring your work to life.  

    At the end of the day, what’s the point of presenting data if no one bothers to read it? “The medium is the message” might not be the whole story, but the medium is certainly important. By making your data as transparent and meaningful as possible, you are contributing to the scientific field and reaching individuals and communities in ways that are connected to real-world needs. 

    Written by members of the ACSM Science Communication Collective:

Rafael Alamilla, MS, is a Ph.D. Candidate in the Department of Health Science at Indiana University in Indianapolis. Rafael’s research aims to promote physical activity among racial minority adults using community-based and theory-driven approaches. He currently serves on the ACSM Student Affairs Committee and is a member of the ACSM Science Communication Collective.


Rachelle Reed, PhD, is the Senior Manager of Scientific Research and Science Communication at Therabody. Rachelle also chairs the Continuing Education Committee on ACSM’s CCRB and is the industry representative for the Worldwide Fitness Trends working group.


Laura Young, PhD, is the Scientific Affairs Program Manager at ACSM and an adjunct faculty member in the Exercise Science Department at Wenatchee Valley College in Wenatchee, Washington.

  • Finding Green Flags in Journal Quality

    by Caitlin Kinser | Nov 21, 2024

In honor of Publication Integrity Week, and with so many scientific journals to choose from, it is important to discuss how to determine journal quality. There are several indexes that provide journal rankings using a variety of algorithms. One of the most well-known is the impact factor (IF), which is calculated by Clarivate Analytics for journals indexed in the Web of Science. IF calculations are part of Clarivate’s Journal Citation Reports, which also provide a number of other journal quality metrics. A journal’s IF is the ratio of the number of citations in a given year to the number of articles published in the previous two years. Essentially, the IF is the average number of times that articles published in the journal over the previous two years were cited in that year.
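The two-year IF calculation described above can be illustrated with a minimal sketch. All figures below are hypothetical, not any real journal's data:

```python
def impact_factor(citations_this_year: int, articles_prev_two_years: int) -> float:
    """Citations received this year to articles from the previous two years,
    divided by the number of articles published in those two years."""
    return citations_this_year / articles_prev_two_years

# Hypothetical journal: 100 + 120 articles published in the prior two years,
# which together drew 550 citations this year.
print(impact_factor(550, 100 + 120))  # 2.5
```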

The h-index was originally developed to indicate the impact and productivity of individual authors. Now the h-index is also used as a quality marker for journals. The h-index can be generated from data in a number of sources, such as the Web of Science and Google Scholar, and is the largest number h such that the journal has published h articles that have each been cited at least h times. Google Scholar also generates an h5-index, which only includes articles published in the most recent five years and so is a better indicator of a journal’s current influence.
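The h-index definition above amounts to a simple counting rule, sketched here with hypothetical citation counts:

```python
def h_index(citation_counts):
    """Largest h such that at least h articles have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one journal's articles:
print(h_index([25, 17, 9, 6, 6, 3, 1]))  # 5 -- five articles cited at least 5 times
```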

    The SCImago Journal Rank (SJR) uses data from journals indexed in Scopus and is based on not only how many citations a journal receives, but also where they come from. The SJR is calculated as the average number of citations (weighted based on citing journal prestige and closeness) to articles published in a journal in the previous three years.
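The "weighted citations per article" idea behind the SJR can be sketched in miniature. Note that this is only a toy illustration with hypothetical numbers: the real SJR derives the prestige weights iteratively (PageRank-style) from the entire Scopus citation network, whereas here they are simply given:

```python
def weighted_citation_average(citation_weights, articles_prev_three_years):
    """Toy 'prestige-weighted citations per article' average. The real SJR
    computes the weights iteratively from the whole citation network;
    here they are assumed as inputs."""
    return sum(citation_weights) / articles_prev_three_years

# Hypothetical: five citations, each weighted by citing-journal prestige,
# to four articles from the previous three years.
print(weighted_citation_average([1.25, 0.75, 1.5, 0.5, 1.0], 4))  # 1.25
```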

Just being indexed in one of the databases, such as the Web of Science, Scopus, MEDLINE (the primary component of PubMed), or the Directory of Open Access Journals, is itself a mark of journal quality, because journals must meet a set of minimum criteria (e.g., article quality and consistency, ethical standards, and journal acceptance rates) to be included. Note that, while Google Scholar includes journal data, it is not considered a journal index due to the lack of quality criteria for inclusion.

Finally, journals from reputable organizations, such as ACSM, are held to standards and have oversight by the parent organization/college. For example, the ACSM Publications Committee members and Editorial Services Office staff ensure that ACSM journals are held to the highest quality standards. Another benefit of publishing in one of the ACSM journals is wide distribution to a large readership.

Lisa Griffin, PhD, is an Associate Professor in the Department of Kinesiology and Health Education and the Director of the Movement and Cognitive Rehabilitation Science Graduate Program (MCRS) at the University of Texas at Austin. Dr. Griffin is the Editor-in-Chief for the Translational Journal of the American College of Sports Medicine (TJACSM).


  • AI in Publication Ethics

    by Caitlin Kinser | Nov 20, 2024

    The future of publication ethics is likely to be significantly influenced by continued advances in artificial intelligence (AI) and the evolving landscape of the peer review process.  These developments could bring both opportunities and challenges in ensuring the integrity, transparency, and quality of scientific publishing.  The following are several key areas where AI and peer review intersect, and how they may shape the future of publication ethics: 

    AI in the Peer Review Process 

    AI could be used to enhance the initial screening of manuscripts. For example, AI tools could quickly detect potential issues such as plagiarism, statistical errors, or methodological flaws before the paper enters the peer review process. This would ensure that only high-quality manuscripts proceed to human review, improving the overall efficiency of the editorial workflow. 

AI can help editors select the most appropriate reviewers based on their expertise and publication history. By using algorithms to match submissions with reviewers who have the right qualifications and no conflicts of interest, AI could streamline the review process and ensure that manuscripts are reviewed by experts in the field.
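One simple way such matching might work is sketched below: scoring each reviewer's expertise description against the manuscript with bag-of-words cosine similarity. This is a toy illustration with made-up reviewer names and texts; production systems use far richer models (topic models, embeddings, citation graphs):

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between simple bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical manuscript topic and reviewer expertise descriptions:
manuscript = "resistance training and muscle hypertrophy in older adults"
reviewers = {
    "Reviewer A": "muscle hypertrophy resistance training aging",
    "Reviewer B": "cardiovascular endurance cycling performance",
}
best = max(reviewers, key=lambda r: cosine_similarity(manuscript, reviewers[r]))
print(best)  # Reviewer A
```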

AI can be employed to detect biases in peer review, such as gender, institutional, or geographical bias. By analyzing past review patterns, AI could identify instances where reviewers may have unfairly rated certain manuscripts based on unrelated factors, and this information could be used to improve the objectivity of the review process.

    AI-Generated Content and Authorship 

    As AI tools become more advanced, they could potentially assist in writing or even generate entire research papers. While these papers may be technically correct, they might lack originality or fail to reflect the author's intellectual input. Journals will need to develop methods to detect AI-generated content and determine whether it meets the ethical standards for authorship. 

    Questions may arise regarding the ethical implications of AI involvement in research authorship. If an AI tool contributes significantly to the content, who should be credited as an author? Should AI tools be acknowledged separately as contributors, or is this a form of "ghost authorship" that misrepresents the nature of intellectual labor in research? 

Authors and journals will need clear guidelines about disclosing the use of AI in the research process. Ethical publication practices will require transparency about how AI was used in writing, data analysis, or interpretation to avoid misleading readers and to maintain the integrity of the research process.

    AI in Data Integrity and Research Verification 

    AI algorithms can be used to assist in verifying research data, detecting anomalies, inconsistencies, or fraudulent manipulation of data. AI-powered tools could scan datasets for errors, helping to ensure that research findings are valid and reproducible. This could prevent the publication of fraudulent or misleading research. 

    AI could help improve the transparency and reproducibility of research by automating the process of checking whether results can be replicated based on shared data and methods. This would significantly improve the reliability of scientific literature. 

    The use of AI tools to analyze and verify data may raise privacy concerns, especially in fields like medical research where patient data is involved. Ensuring that AI tools respect data privacy and confidentiality will be crucial in maintaining ethical standards. 

    AI and Publication Ethics Governance 

    AI tools could assist journal editors and editorial staff in monitoring compliance with publication ethics, such as adherence to the Committee on Publication Ethics (COPE) guidelines. AI could automatically identify instances where ethical standards are at risk or where conflicts of interest may not have been disclosed. 

    As AI continues to shape the publishing landscape, there will likely be a need for more global standards for publication ethics. AI can facilitate the creation of these standards by analyzing trends in publishing practices and providing data-driven insights to guide policy-making at the journal level. 

    AI models trained on data from predominantly Western research environments may inadvertently reflect biases that marginalize non-Western scholars and research perspectives. To ensure equity and fairness, AI tools must be designed to account for diverse global contexts, and ethical guidelines should be developed with inclusivity in mind. 

    Enhancing Author and Reviewer Education 

    AI tools could help educate authors and reviewers about publication ethics, helping them understand issues such as conflict of interest, research misconduct, peer review etiquette, and proper citation. This could improve the quality and integrity of the review process. 

    AI has the potential to reduce human bias in the review process, but it can also perpetuate or even amplify existing biases if the underlying data and algorithms are not carefully managed. It's important that AI tools used in publishing are continuously updated to reflect diverse, unbiased perspectives. 

    Conclusion 

    As AI continues to transform the publication landscape, it offers great promise for improving efficiency, transparency, and integrity in scientific publishing. However, this transformation also brings with it new ethical challenges that must be carefully managed. Editors, editorial boards, publishers, and researchers will need to collaborate to ensure that AI tools are used responsibly, with appropriate safeguards in place to protect the authenticity of scientific research, uphold the quality of peer review, and ensure the fair treatment of all authors and reviewers. 

    The future of publication ethics will be a delicate balance between leveraging AI to improve workflows and ensuring human oversight to preserve the integrity of the scientific process. Ethical governance will need to evolve in tandem with these technological advancements, ensuring that AI enhances and does not undermine the ethical principles that are the foundation of scientific publishing. 


    ChatGPT was used in the writing of this blog post. The following prompt was used to develop the content of the post: “Role of AI in publication ethics”.  

Jeffrey Potteiger, PhD, FACSM, serves as interim dean of the College of Health Professions at Grand Valley State University. He has served in a number of roles supporting ACSM publications, including book author, editorial board member for several journals, and past chair of the ACSM Publications Committee. Dr. Potteiger currently serves as the Scientific Integrity Editor on the editorial board of Medicine & Science in Sports & Exercise®.

  • The Perils of Plagiarism

    by Caitlin Kinser | Nov 18, 2024

    Plagiarism, the act of presenting someone else's work or ideas as your own, is unfortunately a pervasive concern in academic publishing. It undermines professional integrity, stifles creativity, tarnishes reputations, and threatens innovation.

    One of the many dangers of plagiarism is the erosion of trust. In academia, students who plagiarize risk failing assignments or even full courses, facing disciplinary action, and marring their credibility as future professionals. The erosion of trust can have long-term effects on both educational and career trajectories. Of course, plagiarism can also result in the misattribution of credit for ideas and discoveries, again impacting career trajectories. 

    In professional settings, plagiarism can lead to legal repercussions, career termination, and even result in damage to an institution’s reputation that is difficult to mend.

    Plagiarism stifles originality and innovation, acting as a barrier to the development of new viewpoints and voices. It hampers personal growth and limits discovery and the advancement of knowledge. A secondary danger of plagiarism is the risk of perpetuating misinformation. Simply reproducing the thoughts of others without thinking critically about the content may inadvertently proliferate inaccuracies or outdated information, again undermining the very advancement of knowledge that is the goal of academic publishing.

    Another threat brought about by plagiarism is a sense of complacency. Cutting corners rather than investing the care required to communicate original thoughts, ideas, and discoveries risks creating a culture where mediocrity is tolerated.

    As part of the submission and peer review process for our scientific journals, ACSM is leveraging advanced tools to help ensure the originality of works under consideration for publication in one of the six scholarly journals. Working with ACSM publishing partner Wolters Kluwer, the ACSM Editorial Services Office staff and journal editors implemented the Crossref Similarity Check tool (powered by iThenticate) across the ACSM journal portfolio. This plagiarism checking tool allows journal staff and editors to provide an additional preventative measure against plagiarism.
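The core idea behind similarity screening, matching a submission against previously published text, can be illustrated with a toy word n-gram overlap measure. This sketch is not how Crossref Similarity Check or iThenticate actually work (those compare against massive indexed corpora with sophisticated matching); the passages below are invented for illustration:

```python
def ngram_jaccard(text_a: str, text_b: str, n: int = 3) -> float:
    """Jaccard overlap of word n-grams -- a toy stand-in for the kind of
    similarity screening that commercial tools perform at scale."""
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    a, b = ngrams(text_a), ngrams(text_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical submitted passage vs. a previously published one:
submitted = "exercise is medicine and regular physical activity improves health outcomes"
published = "regular physical activity improves health outcomes across the lifespan"
print(round(ngram_jaccard(submitted, published), 2))  # 0.36
```

A score near 1.0 would flag near-verbatim reuse for editorial review; low scores suggest mostly original phrasing.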

    We have also brought on board our first Science Integrity Editor, Dr. Jeffrey Potteiger. Serving in this position, Dr. Potteiger worked with Dr. Andy Jones, the Editor-In-Chief of Medicine & Science in Sports & Exercise® (MSSE), to establish the procedures for responding to allegations of misconduct in research, scholarship and scientific activities for manuscripts and papers published in MSSE. These are important steps taken to help to safeguard the College, its membership, and the publishing enterprise at ACSM. However, more importantly, ACSM is steadfast in its commitment to fostering innovation, authenticity, and integrity in its publishing enterprise. 

    Karyn L. Hamilton, RD, PhD, FACSM, is a member of the faculty at Colorado State University. She serves as a professor in the Health and Exercise Science department, the Director of the Translational Research on Aging and Chronic Disease Lab and Associate Director of the Center for Healthy Aging. She earned her bachelor's and master's degrees at Montana State University and her PhD at the University of Florida where she worked in Scott Powers' lab. She is chair of the ACSM Publications Committee.