
How AI is Revolutionizing Scholarly Manuscript Evaluation in Publishing: An Interview with Nishchay Shah

Artificial intelligence (AI) has opened new vistas for scholarly journals and publishers. Today, in order to delve into this transformative technology, we are chatting with Nishchay Shah, Chief Technology Officer (CTO), Cactus Communications. Nishchay shares insights on how AI-powered tools are revolutionizing manuscript evaluation. We’ll also explore how AI is enhancing the peer review process, streamlining editorial workflows, and ultimately, shaping the future of academic publishing. Nishchay expertly navigates the intricate landscape of AI applications, shedding light on the immense potential and challenges in harnessing AI to propel scholarly communication into a new era.

1. How popular at present are AI-powered tools to complement the manuscript evaluation process, and what are the primary motivations for their adoption?

Response: AI-powered manuscript review tools are in the early to mid stages of adoption but are gaining traction fast, in both pre-submission (author-facing) and post-submission (publisher-facing) use cases. The primary motivation is efficiency: time and cost savings for publishers. Adoption levels vary across journals and publishers, with many seeing a meaningful reduction in time to publication.

2. How does AI complement the expertise of human reviewers, and how do you strike the right balance between automated assessments and human judgment?

Response: By providing initial screening and recommendations, AI can reduce the time and effort spent by human reviewers. The right balance of AI and human judgment depends on the journal’s specific needs and resources – AI can filter out clearly poor-quality or “risky” manuscripts, but final accept/reject decisions in fuzzy cases still require human judgment for nuance and context.

3. What are the typical challenges that a journal or publisher may encounter while integrating AI into the manuscript evaluation system, and how can they be addressed?

Response: Integrating advanced AI tools into manuscript evaluation systems promises to streamline editorial processes and improve both the quality and the turnaround time (TAT) of published work. While many publishers have already integrated such tools into their workflows, there is still a long road ahead before AI becomes the base assistive layer across the industry.

Some of the typical challenges the publishers face are:

  1. Data quality and deep understanding of the industry: AI systems require high-quality, unbiased training data to make accurate evaluations. The adage “garbage in, garbage out” holds especially true for machine learning models; subpar or skewed training data can produce a model that is inaccurate or, worse, biased. In the context of manuscript evaluation, data quality is an even more sensitive issue, because publishing decisions have far-reaching implications for academic careers and the advancement of knowledge.
  2. Expectations from AI: Publishers often want a near-perfect system with minimal to zero false positives. It is important to understand that AI tools are assistive and work only up to a point; while it is reasonable to expect speed and efficiency from these tools, it is still too early to expect AI to take over all publication processes.
  3. System compatibility: Manuscript submission systems like Editorial Manager and ScholarOne are often not owned by the publishers, so integrating AI becomes a complex task that involves multiple stakeholders.

One way to address these challenges is to use tools that were built for the industry and have been refined over multiple iterations. Paperpal Preflight is one such tool, built by CACTUS, which has worked in academic publishing for over 21 years and is a frontrunner in AI- and machine-learning-powered products.

4. What measures can be taken to protect the integrity of the manuscript evaluation process, ensuring that AI does not introduce any biases or distort the evaluation outcomes?

Response: Measures to reduce bias include the use of diverse training data, auditing techniques, and regular human quality evaluations, all of which help maintain the integrity of the process. Although many straightforward decisions can be automated when AI confidence is high, critical decisions that require nuance and context should be handled by humans.
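The routing principle described above – automate only high-confidence calls and escalate everything fuzzy to an editor – can be sketched in a few lines of code. This is purely illustrative: the thresholds, check names, and `ScreeningResult`/`route` helpers are hypothetical and do not correspond to any real tool mentioned in this interview.

```python
from dataclasses import dataclass

# Hypothetical thresholds chosen for illustration only.
AUTO_DESK_REJECT = 0.95  # confidence above which a flagged failure is automated
AUTO_PASS = 0.90         # confidence required for an unflagged check to auto-pass

@dataclass
class ScreeningResult:
    issue: str         # e.g. "plagiarism", "image-manipulation"
    flagged: bool      # did the model flag a problem?
    confidence: float  # model's confidence in its own call, 0..1

def route(results: list[ScreeningResult]) -> str:
    """Route a manuscript based on automated screening results."""
    for r in results:
        if r.flagged and r.confidence >= AUTO_DESK_REJECT:
            return "desk-reject"      # clear, high-confidence failure
        if r.flagged or r.confidence < AUTO_PASS:
            return "human-review"     # fuzzy case: needs editor judgment
    return "proceed-to-peer-review"   # every check passed confidently

# Example: a low-confidence flag sends the paper to a human editor.
checks = [
    ScreeningResult("plagiarism", flagged=False, confidence=0.99),
    ScreeningResult("image-manipulation", flagged=True, confidence=0.60),
]
print(route(checks))  # → human-review
```

The key design choice is that the "fuzzy" branch is the default: the system only ever acts autonomously when its confidence clears an explicit bar, which mirrors the human-in-the-loop balance described in the answers above.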

5. What role do you see AI playing in the future landscape of peer review, and how do you anticipate it will evolve to address emerging challenges in scholarly publishing?

Response: AI has a lot of potential to help out in the peer review process for scientific articles. But it’s not all smooth sailing. One big issue is that AI could help spread false or incorrect information. Since we rely on good science to build new discoveries, it’s crucial to make sure AI gets it right. People are also generally skeptical about trusting AI in this area, especially if it makes mistakes.

As AI gets better, it will do more things like initial checks of submitted articles, flagging sketchy submissions, suggesting who could review the article, and catching errors that human editors might miss. However, the better AI gets, the better it also becomes at creating fake or fraudulent articles. So it’s like a cat-and-mouse game where the AI has to keep improving to catch these issues.

Going forward, AI tools will need to keep getting better and smarter to keep up with new challenges. Even with all this tech, humans are still crucial for making fair and ethical choices in what gets published. The best way ahead is to let AI do what it’s good at, but always have people involved to make the final decisions and catch anything the machines miss. This means being open to trying new things but doing so carefully and responsibly.
