We all (mostly) have 32-bit machines at home, and those machines are built around a microprocessor. I was reading an article about the ARM Cortex; it's a 32-bit microcontroller. That raised a question for me: microcontrollers were made to reduce the external circuitry around a microprocessor, then microprocessors became more powerful while microcontrollers remained in their 8-bit forms for quite a while. Now that we have 32-bit microcontrollers, can't we build a computer around one of them?
Can you base a computer around a 32-bit microcontroller?
Related Solutions
For a one-time board, I think an Arduino with an Ethernet shield is going to be your easiest path. If you want to produce and market it, you should look into a more specialized/customized solution to keep the cost down.
Placing the board inside is good because you will keep your SPI lines short and have power close by, but putting your board outside reduces the risk of shorting against other electronics in the server.
CAN sounds the most applicable in this case. The distances inside a house can be handled by CAN at 500 kbit/s, which sounds like plenty of bandwidth for your needs. The last node can be an off-the-shelf USB-to-CAN interface. That allows software on the computer to send CAN messages and see all the messages on the bus. The rest is software if you want to present this to the outside world as a TCP server or something.
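To make that last part concrete, here is a minimal sketch of the computer side using Linux's SocketCAN API, which most off-the-shelf USB-to-CAN adapters expose. The interface name "can0", the message ID, and the payload are assumptions for illustration:

    /* Minimal SocketCAN sketch: send one CAN frame from a Linux host
       through a USB-to-CAN adapter that appears as network interface
       "can0" (an assumed name; check your adapter's driver). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <net/if.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/can.h>
    #include <linux/can/raw.h>

    int main(void)
    {
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);   /* raw CAN socket */
        if (s < 0) { perror("socket"); return 1; }

        struct ifreq ifr;
        strcpy(ifr.ifr_name, "can0");                /* assumed interface */
        if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

        struct sockaddr_can addr = { 0 };
        addr.can_family  = AF_CAN;
        addr.can_ifindex = ifr.ifr_ifindex;
        if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind"); return 1;
        }

        struct can_frame frame = { 0 };
        frame.can_id  = 0x123;                       /* example 11-bit ID */
        frame.can_dlc = 2;                           /* two data bytes */
        frame.data[0] = 0x01;
        frame.data[1] = 0x02;
        write(s, &frame, sizeof(frame));             /* one whole packet */

        close(s);
        return 0;
    }

Receiving is the mirror image (a read() returns one whole struct can_frame), so presenting the bus to the outside world as a TCP server is ordinary socket programming on top of this.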
CAN is the only communications means you mentioned that is actually a bus, except for rolling your own with I/O lines. All the others are point to point, including Ethernet. Ethernet can be made to look logically like a bus with switches, but the individual connections are still point to point, and getting the logical bus topology will be expensive. The firmware overhead on each processor is also considerably more than with CAN.
The nice part about CAN is that the lowest few protocol layers are handled in the hardware. For example, multiple nodes can try to transmit at the same time, but the hardware takes care of detecting and dealing with collisions. The hardware takes care of sending and receiving whole packets, including CRC checksum generation and validation.
Your reasons for avoiding PICs don't make any sense. There are many programmer designs out there if you want to build your own. One is my LProg, with the schematic available from the bottom of that page. However, building your own won't be cost effective unless you value your time at pennies per hour. It's also about more than just the programmer: you'll need something that aids with debugging. The Microchip PICkit 2 or 3 are very low cost programmers and debuggers. Although I have no personal experience with them, I hear of others using them routinely.
Added:
I see some recommendations for RS-485, but it is not a good choice compared to CAN. RS-485 is an electrical-only standard. It is a differential bus, so it allows multiple nodes and has good noise immunity. However, CAN has all that too, plus a lot more. CAN is also usually implemented as a differential bus. Some argue that RS-485 is simple to interface to electrically. This is true, but so is CAN; either way, a single chip does it. In the case of CAN, the MCP2551 is a good example.
So CAN and RS-485 have pretty much the same advantages electrically. The big advantage of CAN is above that layer, where RS-485 offers nothing: you are on your own. It is possible to design a protocol that deals with bus arbitration, packet verification, timeouts, retries, etc., but actually getting this right is a lot trickier than most people realize.
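To give a feel for how much you end up hand-rolling on RS-485, here is a sketch of just the transmit side of a trivial framing scheme over a UART. Everything here is invented for illustration (the start byte, the CRC-8 polynomial, the uart_put() routine), and note how much is still missing at the end:

    /* Hand-rolled framing for an RS-485 link: start byte, address, length,
       payload, CRC-8. All names and constants are illustrative. */
    #include <stdint.h>
    #include <stddef.h>

    void uart_put(uint8_t b);    /* assumed: blocking UART transmit */

    #define FRAME_START 0x7E

    /* CRC-8 with polynomial 0x07, bit by bit (one of many choices). */
    static uint8_t crc8(const uint8_t *p, size_t n)
    {
        uint8_t crc = 0;
        while (n--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                                   : (uint8_t)(crc << 1);
        }
        return crc;
    }

    void send_frame(uint8_t dest, const uint8_t *payload, uint8_t len)
    {
        uart_put(FRAME_START);
        uart_put(dest);
        uart_put(len);
        for (uint8_t i = 0; i < len; i++)
            uart_put(payload[i]);
        uart_put(crc8(payload, len));
        /* Still unsolved: escaping 0x7E inside the payload, deciding who
           may transmit when, ACKs, timeouts, and retries. CAN handles
           all of that in silicon. */
    }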
The CAN protocol defines packets, checksums, collision handling, retries, etc. Not only is it already there, thought out, and tested, but the really big advantage is that it is implemented directly in silicon on many microcontrollers. The firmware interfaces to the CAN peripheral at the level of sending and receiving packets. For sending, the hardware does the collision detection, backoff, retry, and CRC checksum generation. For receiving, it does the packet detection, clock skew adjustment, and CRC checksum validation. Yes, the CAN peripheral takes more firmware to drive than a UART such as is often used with RS-485, but it takes a lot less code overall since the silicon handles so much of the low-level protocol detail.
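By contrast with the RS-485 sketch above, the firmware side of a CAN transmit typically amounts to filling a buffer and setting one bit. The structure and register names below are made up for this sketch, not taken from any particular datasheet; real peripherals differ in detail but not in spirit:

    /* Illustrative CAN transmit: load a hardware buffer, set a request
       bit, and the peripheral does arbitration, retries, and CRC.
       Names and bit positions are hypothetical. */
    #include <stdint.h>

    typedef struct {
        volatile uint32_t id;       /* message identifier */
        volatile uint32_t dlc;      /* data length code, 0..8 */
        volatile uint8_t  data[8];  /* payload bytes */
        volatile uint32_t ctrl;     /* control/status bits */
    } can_txbuf_t;

    #define TXREQ (1u << 3)         /* hypothetical "transmit request" bit */

    void can_send(can_txbuf_t *buf, uint32_t id,
                  const uint8_t *payload, uint32_t len)
    {
        buf->id  = id;
        buf->dlc = len;
        for (uint32_t i = 0; i < len; i++)
            buf->data[i] = payload[i];
        buf->ctrl |= TXREQ;         /* hardware takes it from here */
    }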
In short, RS-485 is from a bygone era and makes little sense for new systems today. The main issue seems to be people who used RS-485 in the past clinging to it and thinking CAN is somehow "complicated". The low levels of CAN are complicated, but so is any competent RS-485 implementation. Note that several well-known protocols based on RS-485 have been replaced by newer versions based on CAN. NMEA 2000 is one example of such a newer CAN-based standard. Another automotive standard, J1708 (based on RS-485), is pretty much obsolete now in favor of the CAN-based OBD-II and J1939.
Best Answer
It depends on how you define 'computer'.
On the smaller end of the scale, with what you might call traditional microcontrollers, you don't get memory management and seldom see any more RAM than the tiny amount embedded in the chip. I'll admit to very little knowledge about the architecture of the more capable microcontrollers now available, but the presence or absence of these features is probably key to distinguishing between a device best suited for embedded applications and one suited for general-purpose computation.
By 'memory management' I'm referring to the capability to run programs in virtual address spaces and map these to the physical RAM available in the system, a function carried out by what's usually called a memory management unit (MMU).
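For a flavor of what that mapping looks like, here is a toy one-level page table in C. Real MMUs do this in hardware on every access, with multiple levels and permission bits; the sizes here are made up:

    /* Toy MMU: translate a virtual address to a physical one through a
       single-level page table. Page size and table size are arbitrary. */
    #include <stdint.h>

    #define PAGE_SIZE 4096u
    #define NUM_PAGES 256u

    static uint32_t page_table[NUM_PAGES];  /* virtual page -> physical frame */

    uint32_t translate(uint32_t vaddr)
    {
        uint32_t vpage  = vaddr / PAGE_SIZE;          /* which page */
        uint32_t offset = vaddr % PAGE_SIZE;          /* where within it */
        return page_table[vpage] * PAGE_SIZE + offset;
    }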
Without an MMU, if you try to run multiple processes, all of them are forced to share a single address space, which means that unless every process adheres to your memory allocation scheme, one process can very easily crash another. So if you're in total control of designing all the processes, as with an embedded system, this isn't as much of a concern. However, if you're trying to support general-purpose computation, you can't guarantee that all the code that will be executed will respect the memory allocation scheme, and the system will be rather fragile.
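A contrived example of that fragility: without hardware protection, one task's buffer overrun lands directly in another task's variables. Whether the damage hits exactly this variable depends on how the linker lays out memory, but nothing stops the stray write:

    /* Two "tasks" sharing one flat address space. task_b's off-by-one
       overrun writes past its buffer; with no MMU there is nothing to
       stop it reaching task_a's state. Names and bug are contrived. */
    #include <stdio.h>

    static int task_b_buffer[4] = { 1, 2, 3, 4 };
    static int task_a_state = 42;    /* task A's "private" data */

    static void task_b_fill(int n)
    {
        for (int i = 0; i <= n; i++)     /* bug: <= goes one too far */
            task_b_buffer[i] = 0;
    }

    int main(void)
    {
        task_b_fill(4);                  /* writes task_b_buffer[0..4] */
        printf("task A's state: %d\n", task_a_state);  /* likely no longer 42 */
        return 0;
    }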
Lack of RAM is also not much of a problem for embedded systems, because (1) there's usually lots of flash, and (2) not being a general-purpose computer means you don't have to worry about running unanticipated programs at the behest of a user. You know ahead of time the sum total of all the software that will run on the system, and only need RAM for that software's variables. When you try to make your system into a general-purpose computer, though, users are going to expect to run whatever suits them, and that requires RAM.
Now, it's absolutely fine to do general-purpose computation on a device without an MMU and without much memory. The original 8088-based (16-bit) IBM PC, with 128K of RAM, got away with this, as long as you only needed to run one program at a time.
So if you want to define 'computer' as something like 1982 technology, the answer is definitely yes. Or if you have a closed system where you can mitigate the problems of not having an MMU and/or much RAM (e.g., cell phones) by carefully controlling the design of the software, also yes. Or, if your microcontroller has a built-in MMU and gobs of RAM (or can accommodate these externally), you should be able to construct a system that more closely resembles current computers.