UI design is no different from architecture, coding, or testing -- it's part of the scrum process. Some say that in an ideal world the team does all of that within a sprint -- but only enough UI design for that sprint. Others are more pragmatic and say it can't be done, so they do design in one sprint and coding and testing in another, or design and coding in one and testing in the next.
A common pattern is to do design work / UI stories in one sprint and code in the next. After all, if you don't know what the UI is like there's no way you can estimate how much effort will be involved in coding. Consider the difference in the home page for google.com vs bing.com -- both have the same primary function, but bing.com has a considerably more elaborate UI design.
That being said, estimates are just that: estimates. Nothing in scrum requires that they be accurate, only that they be consistent over time. Whether you estimate a story as a 5 or a 13 doesn't matter, as long as similar stories are estimated the same way from sprint to sprint.
So, do what the team thinks is right -- do a UI story before doing the development, or do them together. If the PO says they have to be done at the same time, make sure you factor into your estimates all of the uncertainty. Do your best to estimate, knowing it will be wrong. Learn from that in your retrospective and talk about what worked and what didn't. Then, adjust your procedures going forward.
Above all, don't expect the estimates for the first half-dozen sprints or so to be the least bit accurate. Over time, however, regardless of the strategy you should start to do approximately the same number of points each sprint, which the PO can then use to predict how many stories you can do in a sprint.
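That prediction is simple arithmetic. Here's a minimal sketch of it in Python -- the sprint history and story sizes are made-up example numbers, not anything from a real team:

```python
# Illustrative sketch: forecasting sprint capacity from historical velocity.
# All numbers below are invented examples.

def average_velocity(completed_points):
    """Mean points completed per sprint over recent history."""
    return sum(completed_points) / len(completed_points)

def stories_that_fit(velocity, story_estimates):
    """Count how many of the next backlog stories, in order, fit in one sprint."""
    total, count = 0, 0
    for points in story_estimates:
        if total + points > velocity:
            break
        total += points
        count += 1
    return count

history = [21, 18, 24, 20]   # points completed in the last four sprints
backlog = [5, 8, 3, 5, 13]   # estimates for the next stories, in priority order

velocity = average_velocity(history)        # 20.75
print(stories_that_fit(velocity, backlog))  # -> 3 (5 + 8 + 3 = 16 fits; adding 5 exceeds 20.75)
```

The point isn't the arithmetic -- it's that the forecast only works once your per-sprint totals have stabilized, which is why the first few sprints tell you little.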
The first three points are capacity planning. The organization is trying to budget and predict for the future. Alas, there is no simple or accepted way to predict performance and scalability. Each application and environment is different. Therefore, the best way to answer this is to measure.
Specifically:
- Discuss with your management or product owners what the likely growth in users will be and the types of different users. If they do not know, guess but document that these are guesses.
- Create an automated run through of common paths of your application. You can record activity or enter your own into load testing applications like JMeter.
- Create a test environment that matches your current or projected hardware. Pay close attention to things like bandwidth, storage, SSL, logging or other frequently forgotten aspects that could affect performance. Mock out the third party image service if you can, using smaller or representative images.
- Use the load testing application to generate the proposed load for the projected numbers of users at different times.
- Use an application performance management tool, like AppDynamics or DynaTrace, to measure performance and identify bottlenecks.
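In practice a tool like JMeter handles the replay and reporting for you, but the core idea of the steps above -- replay common paths under concurrency and measure latency -- can be sketched in a few lines of Python. The paths and numbers here are invented for illustration:

```python
# Minimal sketch of a scripted load run: replay common application paths
# concurrently and report latency percentiles. The paths are hypothetical;
# `send` is whatever actually performs the request (e.g. an HTTP client).
import time
from concurrent.futures import ThreadPoolExecutor

COMMON_PATHS = ["/", "/login", "/search?q=widgets", "/cart"]  # hypothetical

def timed_request(send, path):
    """Time a single request made via the injected `send` callable."""
    start = time.perf_counter()
    send(path)
    return time.perf_counter() - start

def run_load(send, concurrent_users, iterations):
    """Replay the common paths with `concurrent_users` parallel workers."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [
            pool.submit(timed_request, send, path)
            for _ in range(iterations)
            for path in COMMON_PATHS
        ]
        latencies = sorted(f.result() for f in futures)
    return {
        "requests": len(latencies),
        "p50": latencies[len(latencies) // 2],
        "p95": latencies[int(len(latencies) * 0.95)],
    }
```

In a real run `send` would issue requests against the test environment; the percentiles are what you compare against your performance requirements.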
In addition to meeting the above requirements, this approach can help you:
- Confirm your environment supports the requested load.
- Find the maximum load your environment supports.
- Find the bottleneck(s) limiting your performance or scalability.
- Experiment with different configurations to see how they perform or scale.
- Observe how the system copes when you trigger failures.
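The "find the maximum load" bullet is usually a stepped search: raise the user count until a service-level threshold breaks. A sketch of that idea, with a stand-in for the actual measurement (the latency model and numbers are purely illustrative):

```python
# Sketch of "find the maximum load": step concurrency upward until a
# service-level threshold (here, p95 latency in milliseconds) is breached.
# `measure_p95_ms` stands in for an actual load-test run at that user count.

def find_max_users(measure_p95_ms, sla_ms, start=10, step=10, limit=1000):
    """Return the highest tested user count whose p95 latency stays within the SLA."""
    supported = 0
    users = start
    while users <= limit:
        if measure_p95_ms(users) > sla_ms:
            break
        supported = users
        users += step
    return supported

# Stand-in model: latency grows linearly with load (illustrative only).
fake_p95_ms = lambda users: 100 + users * 4

print(find_max_users(fake_p95_ms, sla_ms=500))  # -> 100
```

In reality each `measure_p95_ms(users)` call would be a full load-test run, so coarse steps first and finer steps near the limit save a lot of time.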
The last two points -- the HA (high availability) requirement and DR (disaster recovery, presumably with RPO (recovery point objective) and RTO (recovery time objective) targets) -- are harder to predict, as these are really business requirements. Discuss with your management or product owners the likely failures and how much they will cost to mitigate or fix. If both of you are new to this, expect lots of guessing and late nights on your part.
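It helps to turn those targets into concrete numbers before the discussion. A sketch of the two standard conversions -- availability percentage to annual downtime budget, and backup interval against an RPO (the 99.9% and 1-hour figures are illustrative, not recommendations):

```python
# Sketch of turning HA/DR targets into numbers the business can react to.
# The specific percentages and intervals below are illustrative examples.

HOURS_PER_YEAR = 365 * 24  # 8760

def allowed_downtime_hours(availability_pct):
    """Annual downtime budget implied by an availability target."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

def meets_rpo(backup_interval_hours, rpo_hours):
    """Worst-case data loss from periodic backups is one full interval."""
    return backup_interval_hours <= rpo_hours

print(round(allowed_downtime_hours(99.9), 2))           # -> 8.76 hours/year
print(meets_rpo(backup_interval_hours=6, rpo_hours=1))  # -> False
```

Numbers like "99.9% means under nine hours of downtime a year" and "nightly backups cannot meet a one-hour RPO" make the cost trade-offs much easier to discuss.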