I don't think you'll really get anywhere without reworking the way things are done right now.
One example is that during development the team focuses only on new features, ignoring existing bugs and even introducing new ones.
I am fairly certain that this is a grave mistake. Try to explain that bug fixing becomes more difficult and more expensive the longer it is postponed. I don't think there is any excuse for postponing bug fixes for so long.
All agile methods I know emphasize that the software should be technically ready to ship at the end of each cycle/iteration (or even continuously). That means any bug that you wouldn't want to ship must be fixed before any new feature is started.
Try to work with people, and explain your concerns.
Finally, if you cannot get any kind of agreement on such basics, it may be best to move on.
Edit
Based on your amended question: I agree that bug fixing should be handled just like new features. I'd normally have single tasks for "small" bugs (up to a few hours) and stories with subtasks for "big" bugs (multiple days, possible to split into tasks). If your colleagues do not agree, point out that the advantages of agile methods apply to bug fixing just like to new features.
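To make the small-bug/big-bug split concrete, here is a minimal sketch in Python. The item shapes, titles, and estimates are all hypothetical, just to illustrate "single task" versus "story with subtasks":

```python
# Hypothetical backlog items illustrating the split described above:
# a "small" bug is a single task, a "big" bug becomes a story with subtasks.
small_bug = {
    "type": "task",
    "title": "BUG-1102: off-by-one in pagination",
    "estimate_hours": 3,
}

big_bug = {
    "type": "story",
    "title": "BUG-1088: data corruption on concurrent saves",
    "subtasks": [
        {"title": "Reproduce with a failing test", "estimate_hours": 4},
        {"title": "Fix locking in the save path", "estimate_hours": 8},
        {"title": "Migrate corrupted records", "estimate_hours": 6},
    ],
}

def total_estimate(item):
    """Roll a story's subtask estimates up, or return a task's own estimate."""
    if item["type"] == "story":
        return sum(t["estimate_hours"] for t in item["subtasks"])
    return item["estimate_hours"]

print(total_estimate(small_bug), total_estimate(big_bug))  # 3 18
```

The point is simply that both kinds of item live on the same backlog and are estimated, prioritized, and burned down the same way as feature work.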
As to whether to use an agile tracking tool or a bug database: ideally, the two are integrated. If they are not, then yes, you'll have to track work in both. It's annoying, but I've done it myself and didn't find it a big problem. You track bugs as usual, and simply copy and paste the bug id into the title of a task/story when you schedule a bug to be worked on. The task is then just a link to the bug tracker (easy if both are web-based), and all discussion stays in the bug database. If you link consistently, you could probably even run reports across both systems.
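If you follow that convention consistently, even a tiny script can bridge the two systems. The sketch below assumes a made-up "BUG-nnnn" id format and a list of task titles exported from the agile tool; it just pulls out which tracker ids are currently scheduled:

```python
import re

# Hypothetical task titles exported from an agile tracking tool; the
# assumed convention is that a scheduled bug carries its tracker id
# (e.g. "BUG-1234") pasted into the task title.
tasks = [
    "BUG-1234 Fix crash when saving empty file",
    "Implement CSV export",
    "BUG-1301 Memory leak in report generator",
]

BUG_ID = re.compile(r"\bBUG-\d+\b")

def linked_bug_ids(task_titles):
    """Collect the bug-tracker ids referenced by scheduled tasks."""
    ids = []
    for title in task_titles:
        ids.extend(BUG_ID.findall(title))
    return ids

print(linked_bug_ids(tasks))  # ['BUG-1234', 'BUG-1301']
```

Feed that list into a query against the bug database and you have a cheap cross-system report without any formal integration.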
Finally, if you believe you cannot change the way development is organized, you'll have to work within these constraints. Things like introducing some agile methods into the bug fixing phase will probably help.
Still, I am convinced that splitting development into a "feature phase" and a "bugfixing phase" is a grave, possibly fatal mistake. It may not be so serious for your company (every situation is different), but if it is, and if your organization is unable to change this, then your organization may be so dysfunctional that you'd be better off elsewhere. But that is your call to make... good luck!
Edit 2
Finally, if company policies dictate "features first, bug fixing later", maybe you can sidestep them, e.g. by redefining bug fixes as new features (the difference is often a matter of opinion anyway). To some extent, you can try to introduce new ideas "from below" that way. You know, "if you can't beat them, join them".
"...finds a bug, the report goes into a bug tracking database and also becomes a story which should be prioritized just like all other work.
The question is: should bug tracking and feature tracking be separate, and can you use a single system to do both as well as schedule iterations/milestones/etc.?
In terms of a "pure" Agile approach, you allow your team to use any combination of tools and processes that works well for them. Sure, you may find a single product that does everything, but perhaps it doesn't do some things as well as you'd like. If you run multiple systems, you need to determine just how integrated they need to be, and if any integration is needed, find the means to do it, and decide just how much information needs to be duplicated. It all boils down to a cost/benefit situation, so naturally any system employed needs to take into account the impact on a team's overall efficiency.
Where I work, we use Redmine to track bugs and features in a single system across multiple projects, with links between projects where dependencies exist. We create labels that relate to milestones, which for us are effectively long iterations ranging from a few weeks to a few months. For individual tasks and features we tend not to track iterations too closely, so we have no need for burn-down charts, white boards, sticky notes, feature cards and all of that; we've found that for our specific needs, some of it is overkill. Each feature effectively represents a small iteration of between 2 and 10 days, and for those who might care, we log our time estimates against actual time for later analysis. This may sound a little ad hoc, but it works for us, and ultimately our real measure is working code within a series of time frames.
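The estimate-versus-actual log mentioned above is trivial to analyze once exported. A minimal sketch, with entirely made-up feature names and numbers, might compute the overall ratio of actual to estimated time:

```python
# Hypothetical (feature, estimated_days, actual_days) records, as might be
# exported from a time-tracking field for later analysis.
features = [
    ("login page", 3.0, 4.5),
    ("csv import", 5.0, 4.0),
    ("audit log", 8.0, 10.0),
]

def estimate_ratio(records):
    """Overall actual/estimated ratio; a value above 1.0 means we underestimate."""
    estimated = sum(e for _, e, _ in records)
    actual = sum(a for _, _, a in records)
    return actual / estimated

print(round(estimate_ratio(features), 2))  # 1.16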
I suppose that if we decided to employ another, more formally "regimented" methodology, we might consider a tool to aid in tracking progress. Given what we currently have invested in our present method, though, we'd probably feed at minimum the short feature descriptions and time data to the other system, unless someone has developed a Redmine module that does what we want. If it became really important to us, we might create that Redmine module ourselves to avoid any nasty integration headaches.
Best Answer
First off, almost nothing in @DXM's answer matches my experience with Agile, and especially not with Scrum.
The Agile Manifesto states that while comprehensive documentation is valuable, working software is MORE valuable. So, documentation is certainly not a bad thing, but it should truly be in service to creating working software.
Nailing down every detail before beginning to code has proven wasteful again and again, so documentation is generally dealt with in a JIT (just in time) manner. That is, you document what you are actually going to code.
One of the popular ways of doing Scrum is to use User Stories, which are maintained by the Product Owner and kept in the Product Backlog. The Product Backlog is a fairly high-level list of all the stuff that a solution needs to do, and a User Story is generally a nicely sized way to describe each thing on the list. User Stories aren't mandatory, but they do seem to be a good way to not overdo the details and inspire collaboration instead.
So, anyway, when a story is done (the team has created, tested, and deployed something that meets the acceptance criteria), the story is not chucked; it is simply marked done on the backlog. The backlog therefore keeps a record of what was done in each sprint: the stories and the points associated with them. This is what allows you to calculate velocity, and it is valuable documentation in and of itself.
All that said, a User Story may be all the documentation needed to understand a requirement, but more likely, it is something to generate a conversation between the customer and the development team. As such, there are any number of things you can do around that conversation. If it's a face-to-face ad hoc thing, as it often is, the analyst/developers can (and possibly, depending on your organization, should) write down whatever decisions were made and save it somewhere, such as a Wiki or a documentation repository. If it's an email conversation, you can save the emails. If it's a whiteboard session, take a picture of the board with your mobile and save it. The point is, these things are what are helping you get the code done, and might be able to help you later if you need to figure out how or why you did it the way you did.
Another method of capturing requirements is to immediately embed them into test cases (which I believe is what DXM was getting at). This can be really efficient, in that you need to test for each requirement anyway. In this case, you can effectively store your requirements in your testing tool.
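As a sketch of that idea, a story's acceptance criterion can be written straight into executable tests. Everything here is hypothetical (the function, the fee rule, the threshold); the point is that the requirement and its verification are one artifact:

```python
# Hypothetical function under test; assume the story's acceptance criterion
# reads "orders under 10 EUR carry a 2 EUR handling fee, larger orders don't".
def handling_fee(order_total):
    return 2.0 if order_total < 10.0 else 0.0

# The requirement, stored directly as test cases (pytest-style functions):
def test_small_order_carries_fee():
    assert handling_fee(9.99) == 2.0

def test_large_order_carries_no_fee():
    assert handling_fee(10.0) == 0.0
```

When the requirement changes, the test changes with it, so the test suite doubles as the current, trustworthy statement of what the system must do.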
If a story is completed (and accepted) and then the user changes his need, well, then you probably need to create a new story. If you use a wiki for your documentation, you can link the new story back to the original, and likewise link that original story forward to the new stuff so that someone looking at it knows that things changed. That's the nice thing about wikis -- it's easy and fairly painless to link stuff. If you are doing the test driven approach, you'd either update the test case to deal with the change, or create new test cases for the new story if the new and old aren't mutually exclusive.
In the end, it depends on what your need is. If the main thing is to get folks up to speed quickly, it's probably a good idea for someone to write an onboarding document to help them out, so have someone do that. As I mentioned, wikis are a great tool for keeping this sort of thing, so you might consider Atlassian's solutions, which can integrate the Confluence wiki with Jira and Greenhopper for tracking your stories/tasks/defects and managing your project in general. There are lots of other tools out there to choose from as well.