Internal networks often use 1 Gbps connections, or faster. Optical fiber connections or bonding allow much higher bandwidths between the servers. Now imagine the average size of a JSON response from an API. How many such responses can be transmitted over a 1 Gbps connection in one second?
Let's actually do the math. 1 Gbps is 125 000 KB per second. If an average JSON response is 5 KB (which is quite a lot!), you can send 25 000 responses per second through the wire with just one pair of machines. Not bad, is it?
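For those who want to check the arithmetic, here's the same back-of-the-envelope calculation as a short Python sketch (the 5 KB response size and the linear-scaling assumption discussed below are, of course, simplifications):

```python
# Back-of-the-envelope throughput for a single 1 Gbps link.
GBPS = 1_000_000_000               # 1 Gbps = 10^9 bits per second
BYTES_PER_SECOND = GBPS / 8        # 125,000,000 bytes/s = 125,000 KB/s
RESPONSE_KB = 5                    # a generous average JSON response size

responses_per_second = BYTES_PER_SECOND / (RESPONSE_KB * 1000)
print(f"{responses_per_second:,.0f} responses/s per pair of machines")   # 25,000

# Scaling is linear: ten pairs of machines, ten times the throughput.
print(f"{responses_per_second * 10:,.0f} responses/s with ten pairs")    # 250,000
```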
This is why the network connection is usually not the bottleneck.
Another benefit of microservices is that you can scale easily. Imagine two servers, one hosting the API, the other consuming it. If the connection ever becomes the bottleneck, just add another pair of servers and you can double the throughput.
This is for when our earlier 25 000 responses per second become too small for the scale of the app. Add nine more pairs and you can now serve 250 000 responses per second.
But let's get back to our pair of servers and do some comparisons.
If an average non-cached query to a database takes 10 ms, you're limited to 100 sequential queries per second. 100 queries. 25 000 responses. Achieving 25 000 responses per second requires a great amount of caching and optimization (if the response actually needs to do something useful, like querying a database; "Hello World"-style responses don't qualify).
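A minimal sketch of that gap, assuming a hypothetical 10 ms query and Python's built-in `functools.lru_cache` standing in for a real caching layer (all names here are illustrative):

```python
import time
from functools import lru_cache

def fetch_from_db(user_id: int) -> dict:
    time.sleep(0.010)  # simulate a 10 ms non-cached database query
    return {"id": user_id, "name": f"user-{user_id}"}

@lru_cache(maxsize=10_000)
def get_user(user_id: int) -> dict:
    return fetch_from_db(user_id)

start = time.perf_counter()
for _ in range(100):
    get_user(42)  # the first call pays the 10 ms; the other 99 hit the cache
elapsed = time.perf_counter() - start
print(f"100 requests in {elapsed * 1000:.1f} ms")  # ~10 ms instead of ~1000 ms
```

Without the cache, those 100 calls take about a second; with it, nearly all of that time disappears, which is the kind of optimization needed to get anywhere near wire speed.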
On my computer, right now, DOMContentLoaded for Google's home page happened 394 ms after the request was sent. That's less than 3 sequential requests per second. For the Programmers.SE home page, it happened 603 ms after the request was sent. That's not even 2 requests per second. By the way, I have a 100 Mbps internet connection and a fast computer: many users will wait longer.
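If you want a rough feel for those numbers on your own machine, here's a crude way to time the raw HTTP round trip in Python. Note that this measures only network plus server time, not DOMContentLoaded (which also includes parsing and subresource loading), so real page loads are slower than what it prints:

```python
import time
import urllib.request

for url in ("https://www.google.com/", "https://softwareengineering.stackexchange.com/"):
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()  # drain the body so the full transfer is timed
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{url}: {elapsed_ms:.0f} ms, i.e. ~{1000 / elapsed_ms:.1f} sequential requests/s")
```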
If the bottleneck is the network speed between the servers, those two sites could literally do thousands of calls to different APIs while serving the page.
Those two cases show that the network probably won't be your bottleneck in theory (in practice, you should run actual benchmarks and profiling to find the exact bottleneck of your particular system on your particular hardware). The time spent doing the actual work (whether it's SQL queries, compression, or anything else) and sending the result to the end user matters much more.
Think about databases
Usually, databases are hosted separately from the web application using them. This can raise a concern: what about the connection speed between the server hosting the application and the server hosting the database?
There are indeed cases where the connection speed becomes problematic: when you store huge amounts of data that don't need to be processed by the database itself but should be available right away (that is, large binary files). But such situations are rare: in most cases, the transfer time is small compared to the time spent processing the query itself.
Transfer speed does matter when a company hosts large data sets on a NAS, and the NAS is accessed by multiple clients at the same time. This is where a SAN can be a solution. That said, it's not the only one: Cat 6 cables can support speeds up to 10 Gbps, and bonding can increase throughput without changing the cables or network adapters. Other solutions exist as well, involving data replication across multiple NAS devices.
Forget about speed; think about scalability
An important property of a web app is the ability to scale. While actual performance matters (because nobody wants to pay for more powerful servers), scalability matters much more, because it lets you throw additional hardware at the problem when needed.
If your app isn't particularly fast, you'll lose money because you'll need more powerful servers.
If your app is fast but can't scale, you'll lose customers because you won't be able to respond to increasing demand.
In the same way, virtual machines were perceived a decade ago as a huge performance problem. Indeed, hosting an application directly on a server versus hosting it in a virtual machine had a significant performance impact. While the gap is much smaller today, it still exists.
Despite this performance loss, virtual environments became very popular because of the flexibility they give.
As with the network speed, you may find that the VM is your actual bottleneck and that, at your actual scale, you would save a fortune by hosting your app directly, without the VMs. But that's not what happens for 99.9% of apps: their bottleneck is somewhere else, and the loss of a few microseconds to the VM is easily outweighed by the benefits of hardware abstraction and scalability.
There is nothing that explicitly forbids or argues against using stored procedures with microservices.
Disclaimer: I don't like stored procedures from a developer's POV, but that is not related to microservices in any way.
Stored procedures typically work on a monolith database.
I think you're succumbing to a logical fallacy.
Stored procedures are on the decline nowadays. Most stored procedures still in use come from older codebases that have been kept around. Back then, monolithic databases were also much more prevalent than they are now that microservices have become popular.
Stored procs and monolithic databases both occur in old codebases, which is why you see them together more often. But that's not a causal link. You don't use stored procs because you have a monolithic database. You don't have a monolithic database because you use stored procs.
most books on microservices recommend one database per microservice.
There is no technical reason why these smaller databases cannot have stored procedures.
As I mentioned, I don't like stored procs. I find them cumbersome and resistant to future maintenance. I do think that spreading sprocs over many small databases further exacerbates the issues that I already don't like. But that doesn't mean it can't be done.
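Just to illustrate that it can be done: here's a minimal sketch of a microservice-owned database exposing a stored procedure, written against PostgreSQL and called from Python with psycopg2. The schema, procedure, and connection string are all hypothetical, and the snippet assumes a reachable Postgres instance:

```python
import psycopg2

# Hypothetical schema owned entirely by a single "billing" microservice.
DDL = """
CREATE TABLE IF NOT EXISTS invoices (
    id     serial  PRIMARY KEY,
    amount numeric NOT NULL
);

CREATE OR REPLACE PROCEDURE add_invoice(p_amount numeric)
LANGUAGE sql
AS $$
    INSERT INTO invoices (amount) VALUES (p_amount);
$$;
"""

# The DSN is illustrative; each service would point at its own database.
with psycopg2.connect("dbname=billing_service") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
        cur.execute("CALL add_invoice(%s)", (19.99,))
```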
again most microservice architecture books state that they should be autonomous and loosely coupled. Using stored procedures written say specifically in Oracle, tightly couples the microservice to that technology.
On the other hand, the same argument can be made about whatever ORM your microservice uses. Not every ORM supports every database either. Coupling (specifically its tightness) is a relative concept. It's a matter of being as loose as you can reasonably be.
Sprocs do suffer from tight coupling in general regardless of microservices. I would advise against sprocs in general, but not particularly because you're using microservices. It's the same argument as before: I don't think sprocs are the way to go (in general), but that might just be my bias, and it's not related to microservices.
most msa books (that I have read) recommend that microservices should be business oriented (designed using ddd). By moving business logic into stored procedures in the database this is no longer the case.
This has always been my main gripe about sprocs: business logic in the database. Even when not the intention, it tends to somehow always end up that way.
But again, that gripe exists regardless of whether you use microservices or not. The only reason it looks like a bigger issue is because microservices push you to modernize your entire architecture, and sprocs are not that favored anymore in modern architectures.
If you really need the same User object/model/record in most of your microservices, you probably don't need separate microservices at all, OR the services ought to be object-agnostic as far as their knowledge goes. In that case, each microservice manages a sub-type of the User model holding only the data needed for its functionality, with a key used to identify that user across services. In an ideal world, you can then perform the necessary mutations and propagate changes throughout the system through an event broker of some sort (assuming you have an event-driven architecture); otherwise, each service would have to be aware of some (or all) other services, what they're meant to do, and which types of objects they expect to receive.
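As a minimal sketch of that idea, assuming an event-driven setup (all names here, from the dataclasses to the publish function, are illustrative and not tied to any framework):

```python
from dataclasses import dataclass

# Each service stores only the slice of "User" it needs, joined by a shared key.
@dataclass
class AuthUser:             # owned by the authentication service
    user_id: str
    email: str
    password_hash: str

@dataclass
class BillingCustomer:      # owned by the billing service
    user_id: str
    plan: str

@dataclass
class UserEmailChanged:     # event emitted when the auth service mutates its data
    user_id: str
    new_email: str

def publish(event) -> None:
    """Stand-in for an event broker client (Kafka, RabbitMQ, ...)."""
    print(f"publishing {event}")

# The auth service changes its own record, then notifies the rest of the
# system through the broker instead of calling other services directly.
publish(UserEmailChanged(user_id="u-42", new_email="new@example.com"))
```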
Keep in mind that in a true microservice environment, each microservice is expected to have its own data store, isolated from the rest of the system, which backs up my initial statement that you should dump the concept of a shared User entity completely.
For instance, a SendNotification microservice would only receive an object containing a websocket identifier (which is passed along through the services until it eventually reaches this one) and the data to be returned to the user. It would perhaps verify that the needed fields are present and push the notification to the appropriate channel using the WS identifier. It doesn't have to know which types of objects it handles at all.
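A minimal sketch of such a handler, with hypothetical names (handle_notification, push_to_channel) that aren't from any specific library:

```python
def push_to_channel(ws_id: str, payload: dict) -> None:
    """Stand-in for the actual websocket push; illustrative only."""
    print(f"pushing {payload} to channel {ws_id}")

def handle_notification(message: dict) -> None:
    # Verify only the fields this service cares about; stay agnostic
    # about what kind of object the payload describes.
    ws_id = message.get("websocket_id")
    payload = message.get("payload")
    if ws_id is None or payload is None:
        raise ValueError("message is missing websocket_id or payload")
    push_to_channel(ws_id, payload)

handle_notification({"websocket_id": "ws-123", "payload": {"text": "Your order shipped"}})
```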