You've certainly done your research...
From all of my experience with Ansible, what you're looking to accomplish isn't supported. As you mentioned, Ansible states that it does not require passwordless sudo, and you are correct, it does not. But I have yet to see any method of using multiple sudo passwords within Ansible, short of, of course, running multiple configurations.
So, I can't offer the exact solution you are looking for, but you did ask...
"So... how are people using Ansible in situations like these? Setting
NOPASSWD in /etc/sudoers, reusing password across hosts or enabling
root SSH login all seem rather drastic reductions in security."
I can give you one view on that. My use case is 1k nodes in multiple data centers supporting a global SaaS firm, in which I have to design/implement some insanely tight security controls due to the nature of our business. Security is always a balancing act: more usability, less security. The process is no different whether you are running 10 servers, 1,000, or 100,000.
You are absolutely correct not to use root logins either via password or ssh keys. In fact, root login should be disabled entirely if the servers have a network cable plugged into them.
Let's talk about password reuse. In a large enterprise, is it reasonable to ask sysadmins to have a different password on each node? For a couple of nodes, perhaps, but my admins/engineers would mutiny if they had to keep different passwords on 1,000 nodes. Implementing that would be near impossible as well: each user would have to store their own passwords somewhere, hopefully in a KeePass database, not a spreadsheet. And every time you put a password in a location where it can be pulled out in plain text, you have greatly decreased your security. I would much rather they know, by heart, one or two really strong passwords than have to consult a KeePass file every time they needed to log into or invoke sudo on a machine.
So password reuse and standardization is something that is completely acceptable and standard even in a secure environment. Otherwise LDAP, Keystone, and other directory services wouldn't need to exist.
When we move to automated users, ssh keys work great to get you in, but you still need to get through sudo. Your choices are a standardized password for the automated user (which is acceptable in many cases) or to enable NOPASSWD as you've pointed out. Most automated users only execute a few commands, so it's quite possible and certainly desirable to enable NOPASSWD, but only for pre-approved commands. I'd suggest using your configuration management (ansible in this case) to manage your sudoers file so that you can easily update the password-less commands list.
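As a concrete sketch of that approach, here is what managing a sudoers drop-in from Ansible might look like. The user name `deploy`, the command list, and the file name are all assumptions for illustration; `validate` runs `visudo` against the new file before it is installed, so a syntax error can't lock you out of sudo:

```yaml
# Hypothetical task: grant the 'deploy' automation user password-less sudo
# for a small list of pre-approved commands only.
- name: Allow deploy to run approved commands without a password
  copy:
    dest: /etc/sudoers.d/deploy
    content: "deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp, /usr/sbin/service nginx reload\n"
    owner: root
    group: root
    mode: "0440"
    validate: "visudo -cf %s"
```

Because updating the command list is just editing this task, the password-less surface stays small and reviewable in version control.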
Now, there are some steps you can take once you start scaling to further isolate risk. While we have 1,000 or so nodes, not all of them are 'production' servers; some are test environments, etc. Not all admins can access production servers; those that can, though, use the same SSO user/pass|key as they would elsewhere. But automated users are a bit more secure: for instance, an automated tool that non-production admins can access has a user and credentials that cannot be used in production. If you want to launch Ansible on all nodes, you'd have to do it in two batches, once for non-production and once for production.
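That two-batch run maps naturally onto Ansible's `--limit` option; the playbook name and group names below are assumptions about how your inventory is laid out:

```shell
# Hypothetical: same playbook, run separately against each environment,
# assuming the inventory defines 'nonproduction' and 'production' groups.
ansible-playbook site.yml --limit nonproduction
ansible-playbook site.yml --limit production
```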
We also use puppet though, since it's an enforcing configuration management tool, so most changes to all environments would get pushed out through it.
Obviously, if that feature request you cited gets reopened/completed, what you're looking to do would be entirely supported. Even then though, security is a process of risk assessment and compromise. If you only have a few nodes that you can remember the passwords for without resorting to a post-it note, separate passwords would be slightly more secure. But for most of us, it's not a feasible option.
You can access pretty much any inventory facts/variables by doing something like this:
{{ hostvars['foo.example.com']['ansible_eth0']['ipv4']['address'] }}
or, if you want to do it via an index into a group:
{{ hostvars[groups['collectors'][0]]['ansible_eth0']['ipv4']['address'] }}
The big trick is that you need to collect the facts for all the hosts/groups you're interested in. So you would want to modify your playbook that runs against the reporters group to include a no-op (dummy) task that is applied to the collectors group. That will cause Ansible to collect facts about the collectors hosts so that they can be accessed from the reporters group. So you might want to add something like this to the top of your reporters playbook:
- hosts: collectors
  name: Gather facts from collectors
  tasks: [ ]
The empty brackets basically mean that no tasks will be executed, but this will still force Ansible to gather facts about the collectors so that you can then reference them in the tasks that you run against your reporters.
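Putting it together, a minimal sketch of the whole playbook might look like this; the config path and the `collector=` setting name are hypothetical, standing in for however your reporters consume the address:

```yaml
# Play 1: no-op against collectors, run only so their facts get gathered.
- hosts: collectors
  name: Gather facts from collectors
  tasks: [ ]

# Play 2: the reporters can now reference collector facts via hostvars.
- hosts: reporters
  name: Point reporters at the first collector
  tasks:
    - name: Write the collector address into the reporter config
      lineinfile:
        path: /etc/reporter.conf   # hypothetical config file
        line: "collector={{ hostvars[groups['collectors'][0]]['ansible_eth0']['ipv4']['address'] }}"
```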
Edit #1
It occurred to me that I should also mention that as of version 1.8 of Ansible, there is a fact-caching feature that is now available. Fact caching relies on a redis server to store facts between playbook runs. With it enabled, one playbook can reference facts that were obtained by another playbook that was run previously. The example the Ansible documentation gives:
Imagine, for instance, a very large infrastructure with thousands of hosts. Fact caching could be configured to run nightly, but configuration of a small set of servers could run ad-hoc or periodically throughout the day. With fact-caching enabled, it would not be necessary to “hit” all servers to reference variables and information about them.
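Enabling it is an `ansible.cfg` change along these lines; the timeout value is just an example, and this assumes a redis server reachable from the control machine:

```ini
# ansible.cfg -- fact caching backed by a local redis server
[defaults]
gathering = smart
fact_caching = redis
fact_caching_timeout = 86400
```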
Best Answer
Define the user and password in host_vars per each host or group_vars per host group.
You need to define the appropriate connection parameters listed in the Ansible inventory documentation, i.e.:
Per guidance you should encrypt the values with Ansible Vault.
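For example, a per-host file such as `host_vars/web1.example.com.yml` might contain the following; the host name and values are illustrative, and note that older Ansible releases used `ansible_sudo_pass` where newer ones use `ansible_become_pass`:

```yaml
# host_vars/web1.example.com.yml -- connection and escalation settings
ansible_user: deploy
ansible_become: true
ansible_become_pass: "s3cret-sudo-pass"   # encrypt this file with ansible-vault
```

You would then run `ansible-vault encrypt host_vars/web1.example.com.yml` so the sudo password is never stored on disk in plain text.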