A user story is typically created from a need expressed by the client or potential user of the system. It's often of the format "As a {role}, I want {goal} so that {benefits}". The collective set of user stories captures the functionality desired in the system being built. The customer or customer representative prioritizes each user story, typically based on the value added by having the functionality specified in the story.
Once written, user stories are sized and estimated. There are a number of techniques for doing this. The most common method of estimation I've seen is to estimate the effort needed to complete the story in arbitrary, relative values: the team agrees on a base unit and uses it as a common frame of reference for all estimates. I've seen these as unitless values called "story points", but I don't see why you couldn't also estimate the user story in hours. The key is to be consistent across all user stories.
For the first iteration, the team estimates how many story points it can complete and pulls up to that number of points from the backlog into the iteration. If you are estimating in hours, you instead determine how many hours your development team will dedicate to the project during the iteration and pull down that many hours' worth of work. After the iteration, you measure how many points or hours you actually completed and pull down that amount of work for the next iteration.
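The pull-up-to-capacity step can be sketched in a few lines. This is a minimal illustration, not a prescribed algorithm; the story names, point values, and greedy skip-if-too-big rule are all assumptions for the example:

```python
def plan_iteration(backlog, capacity_points):
    """Select stories from a priority-ordered backlog until the
    team's estimated capacity (in story points) is used up."""
    selected = []
    remaining = capacity_points
    for story, points in backlog:
        # Skip any story that no longer fits; later, smaller
        # stories may still fit in the remaining capacity.
        if points <= remaining:
            selected.append(story)
            remaining -= points
    return selected

# Backlog already sorted by customer priority, highest first.
backlog = [("login page", 3), ("search", 8),
           ("export to CSV", 5), ("dark mode", 2)]
print(plan_iteration(backlog, 10))
# → ['login page', 'export to CSV', 'dark mode']
```

The same function works unchanged if you estimate in hours; only the unit of `capacity_points` changes.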
During the entire process, your overall backlog of stories is changing. Features might be removed, new features can be added, or priorities can change. However, none of this affects the work that was pulled down for the current iteration; only between iterations should you adjust what you are working on. You will typically have either an on-site customer representative or someone who can act as the voice of the customer and is in contact with the appropriate people in the customer's organization. They continually refine the requirements and acceptance criteria throughout the project.
How you further break down user stories into tasks is up to you. It might be an undocumented preference of the engineer, or there might be a detailed analysis of exactly what each user story entails. That's something that needs to be specified by tailoring the process to meet the needs of your organization, team, and project.
You should have a definition of done, which can be used to determine when a particular user story is shippable. This covers everything from design and implementation through testing, quality assurance, acceptance criteria, and documentation. You can specify what tools and methods you use to ensure that a given feature, as specified by a user story, is done. Once a user story is done and integrated, the product should be in a potentially-shippable state, meaning that packaging it and delivering it to the customer would add some value to their operations or meet some of their needs.
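One way to keep a definition of done honest is to make it an explicit checklist evaluated per story. The criteria below are examples only; your team would substitute its own:

```python
# Example criteria; a real team would tailor this list.
DEFINITION_OF_DONE = [
    "code reviewed",
    "unit tests pass",
    "acceptance criteria verified",
    "documentation updated",
]

def is_done(completed_checks):
    """A story is done only when every criterion is satisfied."""
    return all(c in completed_checks for c in DEFINITION_OF_DONE)

print(is_done(["code reviewed", "unit tests pass"]))
# → False
```

The point is that "done" is a fixed, shared list, not a per-story negotiation.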
Ultimately, you need to tailor the processes to work for your organization, team, and project. Doing anything "by the book" is usually a recipe for problems. Just because something has been documented and works well for certain teams working on certain projects doesn't mean that it fits everything that you need it to do.
You might be interested in this InfoQ article on user story estimation as well as Scott Ambler's Introduction to User Stories.
There is no hard and fast rule about assigning User Stories within the tool of your choice (some tools may not even support it). It all comes down to how you are using your tool.
If you are using the User Story items merely as a container for Task items, without any defined states or transitions, then it is most logical to leave the User Stories unassigned.
If you transition the User Story items from Open to In Progress to Done along with the Tasks they contain, then you might assign the User Story to the person responsible for keeping the User Story state consistent with the states of the underlying Tasks (which can even be the scrum master), or you can leave it unassigned if that is a team responsibility.
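Keeping the parent consistent with its children amounts to a simple derivation rule. This is a sketch of one plausible rule; the state names and rollup logic are assumptions, not something any particular tool prescribes:

```python
def story_state(task_states):
    """Derive a User Story's state from the states of its Tasks."""
    if not task_states:
        return "Open"  # no tasks yet, nothing has started
    if all(s == "Done" for s in task_states):
        return "Done"
    if any(s in ("In Progress", "Done") for s in task_states):
        return "In Progress"
    return "Open"

print(story_state(["Done", "In Progress", "Open"]))
# → In Progress
```

Some tools apply a rollup like this automatically, which is exactly the case where leaving the User Story unassigned makes sense.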
Ideally, your software should be bug-free after each iteration, and fixing bugs should be part of each sprint, so the work required to fix bugs should be considered when assigning story points (i.e., a task that is more likely to produce bugs should have more story points assigned to it).
In reality, however, bugs surface post-deployment all the time, no matter how rigorous your testing; when that happens, removing the bug is just another change, a feature if you will. There is no fundamental difference between a bug report and a feature request in this context: in both cases, the application shows a certain behavior, and the user (or some other stakeholder) would like to see it changed.
From a business perspective, bugfixes and features are also the same, really: either you do it (scenario B), or you don't (scenario A); both scenarios have costs and benefits attached, and a decent business person will just weigh them up and go with whatever earns them more profit (long-term or short-term depending on business strategy).
So yes, by all means, assign story points to bugs. How else are you going to prioritize bugs vs. features, and bugs against bugs? You need some measure of development effort for both, and it better be comparable.
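Once bugs and features share a story-point scale, they can be ranked together, for instance by business value per point. The scoring scheme and the example items below are illustrations, not a standard:

```python
def prioritize(items):
    """Sort work items (name, business_value, story_points)
    by value density: value delivered per point of effort."""
    return sorted(items, key=lambda i: i[1] / i[2], reverse=True)

work = [
    ("fix checkout crash (bug)", 8, 2),     # 4.0 value per point
    ("new report page (feature)", 10, 5),   # 2.0 value per point
    ("fix typo on help page (bug)", 1, 1),  # 1.0 value per point
]
for name, value, points in prioritize(work):
    print(name)
```

The mechanics don't care whether an item is a bug or a feature, which is the point: a shared effort measure is what makes them comparable at all.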
The biggest problem with this is that bugfixes are often harder to estimate: 90% or more of the actual effort lies in finding the cause; once you have found it, you can come up with an accurate estimate, but it is almost impossible to judge how long the search will take. I've even seen a fair share of bugs where most of the time was spent just trying to reproduce the bug. On the other hand, depending on the nature of the bug, it is often possible to narrow down the search with some minimal research before making an estimate.