Try tail:
tail -n +2 "$FILE"
-n x: just print the last x lines. tail -n 5 would give you the last 5 lines of the input. The + sign inverts the argument, making tail print everything but the first x-1 lines: tail -n +1 would print the whole file, tail -n +2 everything but the first line, etc.
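A quick sketch of the difference (input fed via printf so it is reproducible):

```shell
printf '1\n2\n3\n4\n' | tail -n 2    # last two lines: 3, 4
printf '1\n2\n3\n4\n' | tail -n +2   # everything but the first line: 2, 3, 4
printf '1\n2\n3\n4\n' | tail -n +1   # the whole input
```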
GNU tail is much faster than sed. tail is also available on BSD, and the -n +2 flag is consistent across both tools; check the FreeBSD or OS X man pages for more.

The BSD version can be much slower than sed, though. I wonder how they managed that; tail should just read a file line by line, while sed does pretty complex operations involving interpreting a script, applying regular expressions and the like.
Note: You may be tempted to use
# THIS WILL GIVE YOU AN EMPTY FILE!
tail -n +2 "$FILE" > "$FILE"
but this will give you an empty file. The reason is that the redirection (>) happens before tail is invoked by the shell:
- Shell truncates file $FILE
- Shell creates a new process for tail
- Shell redirects stdout of the tail process to $FILE
- tail reads from the now empty $FILE
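You can reproduce the pitfall safely in a scratch directory (the file name is just for the demo):

```shell
printf 'a\nb\n' > demo.txt
tail -n +2 demo.txt > demo.txt   # WRONG: the shell truncates demo.txt first
wc -c < demo.txt                 # 0 -- the file is now empty
```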
If you want to remove the first line inside the file, you should use:
tail -n +2 "$FILE" > "$FILE.tmp" && mv "$FILE.tmp" "$FILE"
The && will make sure that the file doesn't get overwritten when there is a problem.
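A sketch of the && guard at work: if tail fails (here, on a file that doesn't exist), mv never runs, so the original file isn't clobbered.

```shell
printf 'a\nb\n' > keep.txt
tail -n +2 no-such-file > keep.txt.tmp && mv keep.txt.tmp keep.txt
cat keep.txt   # still "a" and "b": tail failed, so mv never ran
```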
Because that’s how the POSIX standard defines a line:
- 3.206 Line
- A sequence of zero or more non-<newline> characters plus a terminating <newline> character.
Therefore, lines not ending in a newline character aren't considered actual lines. That's why some programs have problems processing the last line of a file if it isn't newline terminated.
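You can see this with wc, which counts newline characters: a trailing line without a terminating newline is not counted as a line.

```shell
printf 'foo\nbar\n' | wc -l   # 2
printf 'foo\nbar'   | wc -l   # 1 -- "bar" is not a POSIX line
```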
There's at least one hard advantage to this guideline when working in a terminal emulator: all Unix tools expect this convention and work with it. For instance, when concatenating files with cat, a file terminated by a newline has a different effect than one without:
$ more a.txt
foo
$ more b.txt
bar$ more c.txt
baz
$ cat {a,b,c}.txt
foo
barbaz
And, as the previous example also demonstrates, when displaying a file on the command line (e.g. via more), a newline-terminated file results in a correct display. An improperly terminated file might be garbled (second line).
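The example above can be reproduced in a scratch directory with printf (file names as in the example):

```shell
printf 'foo\n' > a.txt   # newline-terminated
printf 'bar'   > b.txt   # no trailing newline
printf 'baz\n' > c.txt
cat a.txt b.txt c.txt    # foo, then barbaz
```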
For consistency, it’s very helpful to follow this rule – doing otherwise will incur extra work when dealing with the default Unix tools.
Think about it differently: if lines aren’t terminated by a newline, making commands such as cat useful is much harder. How do you design a command to concatenate files such that

- it puts each file’s start on a new line, which is what you want 95% of the time; but
- it allows merging the last and first lines of two files, as in the example above between b.txt and c.txt?
Of course this is solvable, but you need to make the usage of cat more complex (by adding positional command line arguments, e.g. cat a.txt --no-newline b.txt c.txt), and now the command rather than each individual file controls how it is pasted together with other files. This is almost certainly not convenient.
… Or you need to introduce a special sentinel character to mark a line that is supposed to be continued rather than terminated. Well, now you’re stuck with the same situation as on POSIX, except inverted (line continuation rather than line termination character).
Now, on non-POSIX-compliant systems (nowadays that’s mostly Windows), the point is moot: files don’t generally end with a newline, and the (informal) definition of a line might for instance be “text that is separated by newlines” (note the emphasis). This is entirely valid. However, for structured data (e.g. programming code) it makes parsing minimally more complicated: it generally means that parsers have to be rewritten. If a parser was originally written with the POSIX definition in mind, then it might be easier to modify the token stream rather than the parser: in other words, add an “artificial newline” token to the end of the input.
Best Answer
You have some problems to solve here.
Using slashes (/) inside s///

First, you want to replace wordToFind1 with /usr/bin/try.txt. This will not work directly with the s/// command, because the replacement string contains /. It would lead to a very weird command! sed will think that the command is s/wordToFind1// with some flags (such as u) and other commands following it, but that makes no sense and it will generate an error. One solution is to escape each / from /usr/bin/try.txt with \.
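A sketch of the escaped version (Check.sql and the words come from the question):

```shell
# Every / in the replacement string is escaped as \/
sed -e 's/wordToFind1/\/usr\/bin\/try.txt/' Check.sql
```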
This is clumsy, however. When you have a lot of / in your replacement string (or even in the string being replaced), a better solution IMHO is to use another character as the delimiter of s///. Not everybody knows it is possible, but one can use almost any other character instead of / as the delimiter of s///. In that case, you can use as many / as you want inside your expressions without needing to escape them. Using # instead of /, for example, the slashes from /usr/bin/try.txt
cause no trouble.

Using more than one s/// command

That solved, you should also replace wordToFind2. This is easy: just pass another -e command in the same sed invocation. (Another option is to put more than one command in a single string, separated by semicolons. I find that very useful sometimes with bigger sed scripts, but it is less readable as well.)
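A sketch combining both points, with # as the delimiter and two -e expressions. Note that the question doesn't say what wordToFind2 should become, so replacement2 below is a placeholder:

```shell
# Two substitutions in one sed invocation; # as delimiter, so the
# slashes in /usr/bin/try.txt need no escaping.
sed -e 's#wordToFind1#/usr/bin/try.txt#' \
    -e 's#wordToFind2#replacement2#' \
    Check.sql

# Equivalent single-expression form, separated by a semicolon:
sed -e 's#wordToFind1#/usr/bin/try.txt#; s#wordToFind2#replacement2#' Check.sql
```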
Updating the input file with -i

Now, you need to update the Check.sql file. This is easy as well: just pass the -i flag to sed. This flag makes sed update the original file in place. It can also receive a parameter: an extension to be appended to a backup file holding the original content. In this case, I will use the .bkp extension. Afterwards, Check.sql is changed, and there is a Check.sql.bkp with the old content. This may be helpful if something goes wrong.
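Putting it all together, a sketch of the final command (GNU sed syntax for -i; replacement2 is still a placeholder, since the question doesn't specify what wordToFind2 becomes):

```shell
# -i.bkp updates Check.sql in place and keeps the original as Check.sql.bkp
sed -i.bkp \
    -e 's#wordToFind1#/usr/bin/try.txt#' \
    -e 's#wordToFind2#replacement2#' \
    Check.sql
```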