I seem to be finding conflicting information.

There is a single IETF standard (RFC 793) that describes how TCP functions. As such, it is probably just a matter of wording that makes you think the sources you found conflict with one another.
The tutorials typically use a TcpListener, which is told to listen on a particular port. Let's say 51000. Then in the examples, if this socket finds a client, a TcpSocket is set up (which inherits port number 51000), and the Listener continues to listen. As each client connects, local port number 51000 is reused in the local socket. So we end up with a lot of sockets where the local address and port are the same as each other.

TCP connections are identified by a 4-tuple: both endpoint addresses and their port numbers. As long as one of these values is different, you have a separate connection. As such, you can have connections to as many hosts as you want, all using the same local address and port; if their remote addresses or port numbers differ, they are all different connections. You can even have multiple connections between two hosts while one of them uses the same port for all of them. A typical example of this is an HTTP server: your browser typically opens several simultaneous connections to the same server listening on port 80, and it all works out of the box. The accept pattern is sketched below.
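For illustration, here is a minimal sketch of that pattern with SFML 2.x; the port 51000, the client list and main() are just placeholders taken from the description above, not anything you have to use.

#include <SFML/Network.hpp>
#include <list>

int main()
{
    // One listener bound to the well-known local port.
    sf::TcpListener listener;
    if (listener.listen(51000) != sf::Socket::Done)
        return 1;

    std::list<sf::TcpSocket> clients;
    while (true)
    {
        clients.emplace_back();
        sf::TcpSocket& client = clients.back();

        if (listener.accept(client) == sf::Socket::Done)
        {
            // Every accepted socket shares local port 51000. The connections
            // are still distinct because the remote half of the 4-tuple
            // (client.getRemoteAddress(), client.getRemotePort()) differs.
        }
        else
        {
            clients.pop_back(); // accept failed, discard the placeholder
        }
    }
}

Keeping the accepted sockets in a list (or any other non-relocating container) simply avoids moving them around; the important part is that one listening port serves every client.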
According to http://en.wikipedia.org/wiki/Internet_socket

The sentence that comes after that is basically what I said above.
"A server may create several concurrently established TCP sockets with the same local port number and local IP address, each mapped to its own server-child process, serving its own client process."
So this all sounds ok. However, I found this post: https://github.com/laurentgomila/sfml/issues/150 where it was stated (although it was 2 years ago) that the behaviour of this arrangement is "basically undefined", and that different operating systems will behave differently. This makes it sound like I want to avoid the setup I've just described above.

This is barely, if at all, related to your question. The issue described there is about whether TCP sockets should be able to reuse addresses (from here on meaning the combination of address and port number) that the operating system still has marked as "used". This makes sense if, say, your server for some sad reason crashes and you have a script to automatically restart it. Because proper cleanup could not be performed, the socket lingers around for a while and cannot be reused by new processes unless, as the flag SO_REUSEADDR suggests, you permit the TCP implementation to bind to addresses the operating system still has marked as "used" (see the sketch after this paragraph). If you know that in reality nothing else is really using the address, this is no problem. However, if you start a second server instance without any explicit check to make sure only a single instance runs at a time, you might end up with two processes listening on the same socket. This can lead to tricky situations which you shouldn't have to worry about right now.
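As far as I remember SFML does not expose that flag itself, so purely as an illustration (plain POSIX sockets, not SFML API, error checks omitted, helper name made up), enabling it looks roughly like this:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int makeReusableListener(unsigned short port) // hypothetical helper, not SFML
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    // Allow binding even while the address is still marked as "used"
    // (e.g. lingering in TIME_WAIT after a crashed server instance).
    int yes = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(port);
    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(fd, SOMAXCONN);
    return fd;
}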
So my first question is: what's the latest on this? Is it ok to have a host listening to lots of clients all through the same port, or is it not?

This is the one and only way to do it. If you want clients to be able to connect to a server, they need to know what port to send connection requests to. Changing it all the time would mean the clients have to guess, and that is not a good experience for the end user ;). Some software negotiates and opens separate TCP connections for other data streams, using different ports on both hosts (see FTP), but that is probably something you shouldn't have to worry about either.
If on the host I have configured the socket to be non-blocking, and I call the receive function, I can expect sometimes that received will be less than I was expecting? Let's say I only receive 90 bytes (out of the 100 I need to get my full "packet"). In this case, what's the best way to handle it? Should I wait until the next time I call receive, and then use the data to fill in the rest of my struct? Is this the point of the Packet class?

If you use the "raw" (meaning byte-wise) receiving functions, then packet assembly is left up to you. This is true regardless of whether you are in blocking or non-blocking mode: you must always be ready to receive less than what was sent and wait a bit for the rest. As you already mentioned, TCP is a stream-oriented protocol. You get a stream of data without any indication of where something starts or ends. You can insert markers or use some other system to get this done, but buffering data so that the rest of the application only has to deal with complete packets is all left for you to do. sf::Packet does this automatically through a simple internal buffering mechanism which I won't describe here (you are welcome to look at the source code to see how it is done). When you use sf::TcpSocket::receive(sf::Packet&), it is guaranteed that the packet is complete if the method returns sf::Socket::Done. In any other case, you must assume that something didn't work out and you need to retry receiving into the packet or run your error handling. Unless sf::Socket::Done was returned, the packet will be empty after the method returns. Both approaches are sketched below.
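Here is a minimal sketch of both approaches (SFML 2.x, non-blocking socket assumed; the 100-byte message size and the helper names are just taken from your example, not anything prescribed):

#include <SFML/Network.hpp>
#include <cstddef>

// Raw bytes: keep a caller-owned buffer and a count of how much has arrived.
// Call this every tick until it returns true.
bool receiveFixedMessage(sf::TcpSocket& socket, char (&buffer)[100], std::size_t& filled)
{
    std::size_t received = 0;
    sf::Socket::Status status =
        socket.receive(buffer + filled, sizeof(buffer) - filled, received);

    if (status == sf::Socket::Done)
        filled += received;              // may well be less than what is still missing

    return filled == sizeof(buffer);     // true only once the whole message is in
}

// sf::Packet: the buffering above is done internally; only Done means "complete".
bool receivePacket(sf::TcpSocket& socket, sf::Packet& packet)
{
    return socket.receive(packet) == sf::Socket::Done; // otherwise retry later
}

With the raw variant you also have to decide how the receiver knows where a message ends (a fixed size here, a length prefix or marker otherwise), which is exactly the work sf::Packet does for you.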
It's not just easier, it's also more correct. The potential issues with endianness and type sizes are explained in the packet tutorial.
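For example (just a sketch of that point, the field names are made up), writing data through sf::Packet uses the fixed-size sf::Uint32 type and a consistent byte order on the wire:

#include <SFML/Network.hpp>

sf::Packet makeUpdatePacket(sf::Uint32 id, float x, float y) // hypothetical message
{
    sf::Packet packet;
    packet << id << x << y;   // sizes and endianness handled by sf::Packet
    return packet;
}

void readUpdatePacket(sf::Packet& packet, sf::Uint32& id, float& x, float& y)
{
    packet >> id >> x >> y;   // same order on the receiving side
}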