
Redundant Publications and Salami Slicing in Research


Justin Scott

@JustinScott


What Is a Redundant Publication?

A publication is considered redundant when:

  • A previously published work—or substantial sections of it—is published again (in the same or another language) without proper acknowledgment, cross-referencing, or valid justification, or
  • The same or significantly overlapping data is used in multiple publications without informing the reader or reviewer, making it hard for them to realize that the findings have already been reported.

This isn’t just about text. Redundant publication also covers figures, charts, and datasets that have appeared elsewhere. For example, if an author reuses a figure from a previously published blog, journal article, abstract, or even lecture notes, that content counts as already published. Whether it’s a self-made graph or a clinical image, once it has been published it shouldn’t appear again without citation; in many peer-reviewed journals, authors must also obtain permission from the copyright holder of the original publication before reusing the figure.

The same applies to datasets. If a dataset has already been published, it should not be republished in full. Parts of it can be used in new articles, but republishing the entire dataset, even with new commentary, is unethical unless the reuse is properly referenced and justified.

Salami Slicing: Splitting One Study Into Multiple Papers

Salami slicing, or salami publication, refers to breaking one large research study into multiple smaller publications. Unlike duplicate publication—where the exact same content appears in more than one place—salami slicing spreads different parts of the same study across various journals.

These smaller segments, often called “slices,” may seem like independent studies, but they actually come from the same dataset or experiment. If these slices share the same hypothesis, population, and methodology, this is considered unacceptable—even if the data is presented with different interpretations.

Why is this a problem? Because it:

  • Misleads readers into thinking each paper presents new data from different participants.
  • Skews the scientific literature, artificially boosting the perceived volume of evidence.
  • Wastes time—of editors, reviewers, and readers.
  • Unfairly inflates the author’s publication and citation record.

That’s why most journals require authors to disclose if a manuscript includes data from a larger study already published elsewhere. They also expect authors to submit any related papers—published or not—so the editorial team can properly evaluate the work’s originality.

Why Researchers Split Data (and Why It’s a Problem)

Academic pressure is real. Researchers are often expected to publish frequently, which can tempt some to stretch a single dataset into several papers. At first glance, this might seem smart—it increases publication count, adds to one’s CV, and may help with tenure or funding.

But ethically? It’s a gray area.

While not as serious as fraud or plagiarism, over-publishing:

  • Wastes editorial resources
  • Leads to misinterpretation of data
  • Dilutes the scientific value of the study
  • Can hurt the credibility of the findings

For instance, when results are split across papers, statistical significance can be lost. A compelling finding might become a few weak or incomplete results—and that can confuse or mislead the scientific community.

This practice is sometimes jokingly called the "least publishable unit"—squeezing out the smallest amount of data necessary to justify a publication.

As one editorial put it:

“Nobody is well served by the practice of reporting the same study in two journals, publishing a review of the same subject nearly simultaneously, or splitting a study into several parts for separate submission.”

Are There Ever Justifiable Reasons for Redundant Publication?

Sometimes, yes.

There can be valid reasons to republish or segment findings, such as:

  • Reaching different audiences (e.g., a clinical vs. a technical readership)
  • Focusing more deeply on a specific subset of results
  • Overcoming journal-imposed word limits

But here's the key: transparency is everything. If authors have a legitimate reason to reuse or segment their work, they should declare it upfront. Otherwise, people will likely assume it was done to mislead or manipulate.

Not citing earlier work based on the same dataset or failing to cross-reference overlapping content suggests intentional duplicity. It implies that the authors may be trying to trick readers and editors into thinking each paper is a standalone, novel contribution.

What Does COPE Say?

The Committee on Publication Ethics (COPE) emphasizes the importance of citation and disclosure. According to COPE:

“When the same (or substantially overlapping) data is presented in more than one publication without adequate cross‐referencing or justification, reviewers and readers are unlikely to realise that most or all the findings have been published before.”

COPE encourages researchers to act ethically and not fragment their work without good reason.

Final Thoughts: Think Before You Slice

Researchers should always think carefully about:

  • How they present their work
  • Where they submit it
  • Who their intended audience is

If authors choose to publish multiple papers from the same dataset, they need to clearly explain why and make all related work available for review. Otherwise, the decision will look suspicious—and their professional integrity might take a hit.

Redundant publication or salami slicing might offer short-term rewards, but it risks long-term damage to credibility, trust, and reputation. In the worst-case scenario, it may even lead to article retractions, blacklisting by journals, or academic sanctions.

In short: don’t try to game the system. Be honest about your work and respectful of your readers’ time and trust.

