Bash: remove duplicate lines from a file

# Basic syntax:
sort input_file | uniq
# Sort the file first because uniq only collapses adjacent duplicate
# lines; sorting groups identical lines together. Uniq then keeps one
# instance of each line and drops the repeats
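# For example, assuming a hypothetical file fruits.txt containing the
# lines "apple", "banana", "apple" (in that order), running:
sort fruits.txt | uniq
# prints each distinct line exactly once:
#   apple
#   banana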

# Note, this doesn't return only the non-duplicated lines. It returns
# one instance of every line, whether or not it is duplicated
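# If you do want only the lines that are NOT duplicated (i.e. lines
# that appear exactly once in the file), that is what uniq's
# -u/--unique flag is for:
sort input_file | uniq -u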

# Note, if you want to keep one instance of every line but also see
# the number of repetitions for each line, run:
sort input_file | uniq -c
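# For the hypothetical fruits.txt above, this prints each line
# prefixed with its count:
#   2 apple
#   1 banana
# A common follow-up is sorting by that count so the most-repeated
# lines come first (numeric, reverse):
sort input_file | uniq -c | sort -rn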
