I don't think you'll really get anywhere without reworking the way things are done right now.
One example is that during development we only focus on new features
and ignore existing bugs and even introduce new bugs.
I am fairly certain that this is a grave mistake. Try to explain that bug fixing becomes more difficult and more expensive the longer it is postponed; there is no good excuse for deferring fixes that long.
All agile methods I know emphasize that the software should be technically ready to ship at the end of each cycle/iteration (or even continuously). That means any bug that you wouldn't want to ship must be fixed before any new feature is started.
Try to work with people, and explain your concerns.
Finally, if you cannot get any kind of agreement on such basics, it may be best to move on.
Edit
Based on your amended question: I agree that bug fixing should be handled just like new features. I'd normally have single tasks for "small" bugs (up to a few hours) and stories with subtasks for "big" bugs (multiple days, possible to split into tasks). If your colleagues do not agree, point out that the advantages of agile methods apply to bug fixing just as they do to new features.
As to whether to use an agile tracking tool or a bug database: ideally, the two are integrated. If they are not, then yes, you'll have to track work in both. It's annoying, but I've done it myself and didn't find it a big problem. You track bugs as usual, and simply copy & paste the bug ID into the title of a task/story when you schedule a bug to be worked on. The task is just a link to the bug tracker (easy if both are web-based), and all discussion stays in the bug database. If you link consistently, you could probably even run reports across both systems.
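To illustrate such a cross-system report, here is a minimal sketch in Python. It assumes a hypothetical setup where bug IDs of the form `BUG-123` were pasted into task titles; the task list and bug records are invented example data, not any real tool's API.

```python
import re

# Hypothetical export from the agile tool; bug IDs are pasted into titles.
tasks = [
    {"title": "BUG-101: crash when saving", "sprint": "S14", "status": "Done"},
    {"title": "Implement CSV export", "sprint": "S14", "status": "In Progress"},
    {"title": "BUG-205: slow login", "sprint": "S15", "status": "To Do"},
]

# Hypothetical export from the bug database, keyed by bug ID.
bugs = {
    "BUG-101": {"severity": "critical", "reported": "2023-01-10"},
    "BUG-205": {"severity": "minor", "reported": "2023-02-02"},
}

BUG_ID = re.compile(r"\b(BUG-\d+)\b")

def cross_report(tasks, bugs):
    """Join scheduled tasks to bug records via the ID embedded in the title."""
    report = []
    for task in tasks:
        match = BUG_ID.search(task["title"])
        if match and match.group(1) in bugs:
            report.append({
                "bug": match.group(1),
                "sprint": task["sprint"],
                "status": task["status"],
                "severity": bugs[match.group(1)]["severity"],
            })
    return report

for row in cross_report(tasks, bugs):
    print(row)
```

The whole scheme hinges on the ID format being used consistently in task titles; a stray typo in the ID silently drops that bug from the report.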
Finally, if you believe you cannot change the way development is organized, you'll have to work within those constraints. Introducing some agile practices into the bug-fixing phase will probably help.
Still, I am convinced that splitting development into a "feature phase" and a "bugfixing phase" is a grave, possibly fatal mistake. It may not be so serious for your company (every situation is different), but if it is, and if your organization is unable to change this, then your organization may be so dysfunctional that you'd be better off elsewhere. But that is your call to make... good luck!
Edit 2
Finally, if company policies dictate "features first, bug fixing later", maybe you can sidestep them, e.g. by redefining bug fixes as new features (the difference is often a matter of opinion anyway). To some extent, you can try to introduce new ideas "from below" that way. You know, "if you can't beat them, join them".
I think it really depends on the type of project and what kind of decisions you must make, i.e., what the purpose of a test plan is for you.
Here I will share some experience regarding our project.
In our project and team, testing means:
- preparing test environments required for system tests
- preparing test data: getting real data from production or simulating it
- automating tests at the unit, integration, system, and system-integration levels
- writing mocks, test drivers, data simulators, and test frameworks
- demonstrating development output
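As a concrete example of the mock/test-driver point above, here is a minimal Python sketch using the standard library's `unittest.mock`. The `build_report` function and its data shape are invented for illustration; the idea is simply that the external data source is replaced by a mock so the test needs no environment setup.

```python
from unittest import mock

# Hypothetical production code: a report builder that depends on an
# external data source we cannot call from a unit test.
def build_report(fetch):
    """Summarize rows returned by the injected fetch() callable."""
    rows = fetch()
    return {"count": len(rows), "total": sum(r["amount"] for r in rows)}

# Test driver: substitute a mock that returns simulated data.
def test_build_report():
    fake_fetch = mock.Mock(return_value=[{"amount": 10}, {"amount": 5}])
    result = build_report(fake_fetch)
    assert result == {"count": 2, "total": 15}
    fake_fetch.assert_called_once()

test_build_report()
```

Because the dependency is injected, the same function can run against the real data source in a system test and against simulated data in a unit test.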
So planning tests is about making high-level decisions that clarify the above. For instance, we decide whether QA needs to end-to-end test a given story, or whether it is better to increase test coverage at the unit level, extending the tests written by developers. Getting test data often means contacting customers, business analysts, and other stakeholders for access. End-to-end tests often require the know-how of more experienced testers, business analysts, and delivery people, so we must plan training ahead of time. Testers often present what the team has achieved, so we sometimes plan which tests to prepare for the demo.
Automation is a separate story. We have an established methodology and frameworks for testing, so testing means implementing and executing test cases. On the other hand, we are developing a new test framework, which is a separate development project running in parallel with the main one. This must be planned as well; it cannot be done ad hoc, and we are still learning from other teams how to plan it effectively.
We usually know what we will be doing in the current or coming release, so planning tests is done ahead for several iterations.
We usually do some test planning with the rest of the team when an iteration starts. We do not always write down every decision about testing, but we still plan the testing process.
We, as testers, sometimes need to support both the current iteration and regression tests. So we need to plan how much of our capacity can be devoted to the current iteration, and whether developers can cover some of our testing tasks if we are too busy with other responsibilities. This is agile, so developers can do some testing, but we still need to plan this to some degree.
Best Answer
I've seen two basic approaches:
Add recurring, timeboxed tasks to each sprint for these kinds of issues. This increases the visibility of the work and allows checking whether a specific step or task is Done. The downside is the extra overhead of cloning or recreating these tasks from sprint to sprint.
Just set aside a collective timebox for these tasks each sprint, reducing the time available for implementing stories accordingly. This is simpler to manage, but makes it harder to track the individual tasks inside the box.
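The arithmetic behind the second approach can be sketched in a few lines of Python. All numbers here are invented for illustration, not a recommendation for any particular split.

```python
# A minimal sketch: reserve a collective timebox for maintenance work
# and plan stories against the remaining capacity.

def story_capacity(team_hours, timebox_hours):
    """Hours left for stories after reserving the maintenance timebox."""
    if timebox_hours > team_hours:
        raise ValueError("timebox exceeds total capacity")
    return team_hours - timebox_hours

# Example: 5 people * 6 focus hours/day * 10 sprint days = 300 hours,
# with 20% reserved for bug fixes and recurring chores.
total = 5 * 6 * 10
reserved = int(total * 0.20)
print(story_capacity(total, reserved))  # 240
```

Whichever approach you pick, the key point is the same: the reserved time is subtracted from sprint capacity up front, rather than letting maintenance work silently eat into story commitments.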