For all unstaged files in the current working directory, use:
git checkout -- .
For a specific file use:
git checkout -- path/to/file/to/revert
The -- here removes ambiguity (this is known as argument disambiguation).
As of Git 2.23, one may prefer the more specific
git restore .
or, for a single file,
git restore path/to/file/to/revert
which, together with git switch, replaces the overloaded git checkout and thus removes the need for argument disambiguation.
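A concrete sketch of the ambiguity, using a throwaway repository and a made-up file name that happens to clash with a branch name:

```shell
# Throwaway repo where a *file* is named "master", same as a typical branch.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config user.email you@example.com && git config user.name you
echo one > master && git add master && git commit -qm 'add file named master'

echo 'local edit' > master
git checkout -- master      # -- says: treat "master" as a path, not a branch
cat master                  # back to "one"

echo 'local edit' > master
git restore master          # Git 2.23+: restore always operates on paths
cat master                  # back to "one"
```

Here the command name itself carries the intent: git restore always takes paths and git switch always takes branches, so no separator is needed.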
git-clean - Remove untracked files from the working tree
Synopsis
git clean [-d] [-f] [-i] [-n] [-q] [-e <pattern>] [-x | -X] [--] <path>…
Description
Cleans the working tree by recursively removing files that are not under version control, starting from the current directory.
Normally, only files unknown to Git are removed, but if the -x
option is specified, ignored files are also removed. This can, for example, be useful to remove all build products.
If any optional <path>...
arguments are given, only those paths are affected.
Step 1 is to show what will be deleted by using the -n
option:
# Print out the list of files and directories which will be removed (dry run)
git clean -n -d
Step 2 is the clean itself - beware: this will delete files:
# Delete the untracked files from the working tree
git clean -f
- To remove directories, run
git clean -f -d
or git clean -fd
- To remove ignored files, run
git clean -f -X
or git clean -fX
- To remove ignored and non-ignored files, run
git clean -f -x
or git clean -fx
Note the case difference on the x for the latter two commands.
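The variants above can be exercised safely in a throwaway repository, which makes the x/X distinction concrete (file names here are made up):

```shell
# Throwaway repo showing what each clean variant touches.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
printf '*.log\n' > .gitignore && git add .gitignore
touch untracked.txt build.log   # one untracked, one ignored file

git clean -n       # dry run: would remove untracked.txt, not build.log
git clean -f       # removes untracked.txt; ignored build.log survives
git clean -fX      # removes only ignored files: build.log
```

After -fX, only the tracked .gitignore remains; -fx would have removed both categories in one step.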
If clean.requireForce
is set to "true" (the default) in your configuration, you need to specify -f;
otherwise nothing will actually happen.
Again see the git-clean
docs for more information.
Options
-f, --force
If the Git configuration variable clean.requireForce is not set to false, git clean will refuse to run unless given -f, -n, or -i.
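A small sketch of that guard in a throwaway repository (the junk file name is made up):

```shell
# With clean.requireForce at its default of true, a bare "git clean"
# refuses to run and deletes nothing.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
touch junk.txt
git clean || echo "refused: pass -f, -n, or -i"
ls junk.txt                   # still there
git clean -n                  # a dry run satisfies the guard
# The guard can be disabled, though that is rarely a good idea:
git config clean.requireForce false
```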
-x
Don’t use the standard ignore rules read from .gitignore (per directory) and $GIT_DIR/info/exclude, but do still use the ignore rules given with -e options. This allows removing all untracked files, including build products. This can be used (possibly in conjunction with git reset) to create a pristine working directory to test a clean build.
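The pristine-build combination the docs mention can be sketched end to end in a throwaway repository (file names are made up):

```shell
# Combine git reset with -x to get a pristine tree: discard tracked
# edits plus all untracked and ignored files.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config user.email you@example.com && git config user.name you
echo 'int main(void){return 0;}' > main.c
echo '*.o' > .gitignore
git add . && git commit -qm 'initial'

echo '/* edited */' >> main.c     # tracked change
touch main.o scratch.txt          # ignored + untracked files

git reset --hard -q               # tracked files back to HEAD
git clean -xfd                    # remove untracked AND ignored files
git status --short                # clean: no output
```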
-X
Remove only files ignored by Git. This may be useful to rebuild
everything from scratch, but keep manually created files.
-n, --dry-run
Don’t actually remove anything, just show what would be done.
-d
Remove untracked directories in addition to untracked files. If an
untracked directory is managed by a different Git repository, it is
not removed by default. Use the -f option twice if you really want to
remove such a directory.
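The nested-repository behavior can be demonstrated in a throwaway repository (the vendored/ directory name is made up):

```shell
# An untracked directory that is itself a Git repo survives a single -f.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
mkdir vendored && git -C vendored init -q

git clean -fd                # nested repo vendored/ is left in place
ls -d vendored               # still exists
git clean -ffd               # a second -f removes it too
```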
Best Answer
Forget what Perl Best Practices says. It's not the bible, and it merely suggests using RCS keywords because at the time it was written no one was thinking about other source control systems. Your goal should never be compliance with PBP's particular implementation, but adapting the ideas in PBP to your own situation. Remember to read the first chapter of that book.
First, let's fix your assumptions:
You don't need a separate version for each module in a distribution. You only need to give each module file a version that is different from that in previous distributions. Every module in the distro can have the same version, and when they do, they can all still be greater than the version from the last distro.
Why not change the versions of a rapidly changing module manually? You should have defined points where your code becomes something that people can use. At those points, you do something to say that you've made the decision that your work product should be distributed, whether as a test or stable release. You change the versions as a way to tell people something about your development. When you let the source control system do that merely because you commit, you lose your chance to denote cycles in your development. For instance, I typically use two-place minor releases. That means I get 100 releases before the collating goes out of whack and I need to bump the major version to restore a proper sort order. That's not enough version space if I let the VCS handle this for me.
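The collation problem can be seen with plain string sorting of version strings, a sketch using the shell's sort:

```shell
# Once the minor part needs a third digit, plain string ordering breaks:
printf '1.98\n1.99\n1.100\n' | LC_ALL=C sort   # "1.100" sorts before "1.98"
# GNU sort's version ordering shows what the numbering was supposed to do:
printf '1.98\n1.99\n1.100\n' | sort -V
```

This is why a two-digit minor scheme runs out at 100 releases: the next release needs a third digit, and ordinary string comparison then puts it in the wrong place.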
I used to use RCS keywords to link the versions of my modules to their checkin or revision number, but I never really liked that. I make many commits to a file before it's ready to be the next version, and I don't need $VERSION changing merely because I fixed a documentation typo. There would be big jumps in version numbers because I had made a lot of little changes.
Now I just change the versions of all of my module files when I'm ready to release a new distribution, using the ppi_version tool to change all the versions at once.
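A sketch of that step, assuming the ppi_version command from the CPAN distribution PPI::App::ppi_version is installed (the exact invocation may differ by version; the version numbers are examples):

```shell
# Rewrite every $VERSION in the distribution's module files in one pass;
# ppi_version is a CPAN tool, not part of core Perl.
ppi_version change 1.23 1.24
```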
All of my module files get the same
$VERSION
. I don't need to use$VERSION
to tell them apart because I use normal source control features to do that. I don't need$VERSION
to tie it to a specific commit.If I'm working toward a new distribution from version 1.23, I start making development versions 1.23_01, 1.23_02, and so on, but only when I'm ready to let people try those versions. I change the version at the start of the cycle, not the end. All of my commits leading up to the next release already have their next version. I also make notes about what I want that cycle to accomplish.
When I think it's the start of a new cycle, I bump the version again. When I think I have a stable release, I change the development $VERSION to a stable one, like 1.23_04 to 1.24. Whenever I release something, I tag it in source control too. I can see where my major points of development line up with source control quite easily.
Everything is much easier this way for me. Nothing is tied to the source control that I decide to use, so I don't have to redo everything if I change what I use.
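The tagging step, sketched with Git (the tag name and message are examples; the same idea works in any VCS):

```shell
# Mark the stable release so development milestones line up with history:
git tag -a v1.24 -m 'stable release 1.24'
git tag -l 'v1.*'            # list release tags
```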