* Changed the Send*Data structs in IClientAPI to use public readonly members instead of private members and getters
* Made Parallel.ProcessorCount public
* Started switching over packet building methods in LLClientView to use Util.StringToBytes[256/1024]() instead of Utils.StringToBytes()
* More cleanup of the ScenePresences vs. ClientManager nightmare
* ScenePresence.HandleAgentUpdate() will now time out and drop incoming AgentUpdate packets after three seconds (see the timeout sketch after this list). This fixes a deadlock on m_AgentUpdates that was blocking up the LLUDP server
* Handle the AgentFOV packet
* Bypass queuing and throttles for ping checks to make ping times more closely match network latency
* Only track reliable bytes in LLUDPClient.BytesSinceLastACK
* Changed the throttling logic to obey the requested client bandwidth limit but also share bandwidth between some of the categories to improve throughput on high-prim or heavily trafficked regions (a token bucket sketch follows this list)
* OnQueueEmpty is still called async, but will not be called for a given category if the previous callback for that category is still running. This is the most balanced behavior I could find, and seems to work well
* Added support for the old [ClientStack.LindenUDP] settings (including setting the receive buffer size) and added the new token bucket and global throttle settings
* Added the AssetLoaderEnabled config variable to optionally disable loading assets from XML every startup. This gives a dramatic improvement in startup times for those who don't need the functionality every startup
* Changed the way the QueueEmpty callback is fired. It will be fired asynchronously as soon as an empty queue is detected (this can happen immediately following a dequeue), and will not be fired again until at least one packet is dequeued from that queue. This gives callbacks advance notice of an empty queue and prevents callbacks from stacking up while the queue is empty (see the sketch after this list)
* Added LLUDPClient.IsConnected checks in several places to prevent unwanted network activity after a client disconnects
* Prevent LLClientView.Close() from being called twice every disconnect
* Removed the packet resend limit and improved the client timeout check
* Added some missing implementations of IClientAPI.RemoteEndPoint
* Added a ClientManager.Remove(UUID) overload
* Removed a reference to a missing project from prebuild.xml
* Changed the ClientManager interface to reduce potential errors with duplicate or mismatched keys
* Added IClientAPI.RemoteEndPoint, which can (hopefully) eventually replace IClientAPI.CircuitCode
* Changed the order of operations during client shutdown
* Removed the confusing (and LL-specific) shutdowncircuit parameter from IClientAPI.Close()
* Updated the LLUDP code to only use ClientManager instead of trying to synchronize ClientManager and m_clients
* Remove clients asynchronously since it is a very slow operation (including a 2000ms sleep)
* Move ViewerEffect handling to Scene.PacketHandlers
* Removing the unused CloseAllAgents function
* Trimming ClientManager down. This class needs to be reworked to keep LLUDP circuit codes from intruding into the abstract OpenSim core code
* Changing OpenSimUDPBase back to high concurrency. High concurrency mode seems to make other problems happen faster, so it's helpful for working out other bugs
* Allow the UDP server to bind to a user-specified port again
* Updated to a newer version of OpenSimUDPBase that streamlines the code even more. This also reintroduces the highly concurrent packet handling which needs more testing
* Clients are no longer disconnected when a packet handler crashes. We'll see how this works out in practice
* Added documentation and cleanup, getting ready for the first public push
* Deleted an old LLUDP file
* Replaced logic in ThreadTracker with a call to System.Diagnostics that does the same thing
* Added Util.StringToBytes256() and Util.StringToBytes1024() to clamp output at byte[256] and byte[1024], respectively (a clamping sketch follows this list)
* Fixed formatting for a MySQLAssetData error logging line
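A minimal sketch of the ScenePresence.HandleAgentUpdate() timeout noted above, assuming m_AgentUpdates is a plain object used as a lock; the real code may structure the check differently:

```csharp
using System.Threading;
using OpenSim.Framework; // IClientAPI, AgentUpdateArgs

public class AgentUpdateTimeoutSketch
{
    private readonly object m_AgentUpdates = new object();

    public void HandleAgentUpdate(IClientAPI remoteClient, AgentUpdateArgs agentData)
    {
        // If the lock cannot be taken within three seconds, drop the update
        // instead of blocking the LLUDP packet-handling thread indefinitely
        if (!Monitor.TryEnter(m_AgentUpdates, 3000))
            return;

        try
        {
            // ... process the movement/camera update under the lock ...
        }
        finally
        {
            Monitor.Exit(m_AgentUpdates);
        }
    }
}
```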
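The throttle-sharing entries above describe a hierarchical token bucket: each category draws from its own bucket and, recursively, from a parent bucket that enforces the client-wide limit. The sketch below is a generic illustration under those assumptions (single-threaded; the class and member names are not the actual LLUDP TokenBucket API):

```csharp
using System;

public class TokenBucketSketch
{
    private readonly TokenBucketSketch m_parent; // null for the root (per-client) bucket
    private readonly int m_maxBurst;             // capacity in bytes
    private readonly int m_dripRate;             // refill rate in bytes per second
    private int m_content;                       // tokens currently available
    private int m_lastDrip = Environment.TickCount;

    public TokenBucketSketch(TokenBucketSketch parent, int maxBurst, int dripRate)
    {
        m_parent = parent;
        m_maxBurst = maxBurst;
        m_dripRate = dripRate;
    }

    // Remove 'amount' bytes of tokens from this bucket and, recursively, from
    // the parent, so categories share the overall client bandwidth limit
    public bool RemoveTokens(int amount)
    {
        Drip();
        if (m_content < amount)
            return false;
        if (m_parent != null && !m_parent.RemoveTokens(amount))
            return false;
        m_content -= amount;
        return true;
    }

    private void Drip()
    {
        int now = Environment.TickCount;
        int elapsedMS = now - m_lastDrip;
        if (elapsedMS <= 0)
            return; // no time elapsed (or the tick counter wrapped)
        m_content = Math.Min(m_maxBurst, m_content + (int)((long)m_dripRate * elapsedMS / 1000));
        m_lastDrip = now;
    }
}
```

A packet waits until RemoveTokens() succeeds, which is how per-category buckets can use slack bandwidth without exceeding the client-wide rate.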
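For the OnQueueEmpty behavior described above, one way to get "fire once per empty detection, and not again until something is dequeued" is a per-category flag flipped with Interlocked operations. This is a hedged sketch (one flag per throttle category; the callback wiring in LLUDPClient is more involved):

```csharp
using System;
using System.Threading;

public class QueueEmptySketch
{
    // Invoked when a throttle category's queue is detected empty
    public Action<int> OnQueueEmpty;

    // 0 = may fire, 1 = fired and no packet has been dequeued since
    private int m_queueEmptyFired;

    public void QueueEmptyDetected(int category)
    {
        // Fire asynchronously, but only once per empty transition
        if (OnQueueEmpty != null &&
            Interlocked.CompareExchange(ref m_queueEmptyFired, 1, 0) == 0)
        {
            ThreadPool.QueueUserWorkItem(o => OnQueueEmpty(category));
        }
    }

    public void PacketDequeued()
    {
        // A packet left the queue, so the next empty detection may fire again
        Interlocked.Exchange(ref m_queueEmptyFired, 0);
    }
}
```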
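The Util.StringToBytes256()/StringToBytes1024() helpers above boil down to "UTF-8 encode, null-terminate, and clamp to the fixed field size the packet expects". A rough sketch (the real Util code may handle truncation of multi-byte characters differently):

```csharp
using System;
using System.Text;

public static class StringClampSketch
{
    public static byte[] StringToBytesClamped(string str, int maxLength)
    {
        if (string.IsNullOrEmpty(str))
            return new byte[0];

        if (!str.EndsWith("\0"))
            str += "\0"; // LLUDP string fields are null-terminated

        byte[] data = Encoding.UTF8.GetBytes(str);
        if (data.Length > maxLength)
        {
            Array.Resize(ref data, maxLength);
            data[maxLength - 1] = 0; // keep the terminator after truncation
        }
        return data;
    }

    public static byte[] StringToBytes256(string str)  { return StringToBytesClamped(str, 256); }
    public static byte[] StringToBytes1024(string str) { return StringToBytesClamped(str, 1024); }
}
```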
- adds a configurable socket receive buffer size option for
  LLUDPServer. On Windows .NET the default socket receive buffer size
  is 8192 bytes; on recent Linux systems it's about 111 KB. Both
  values can be a bit small for an OpenSim instance serving many
  clients. The socket receive buffer size can be configured via an
  OpenSim.ini config option
- adds a general catch clause to LLUDPServer.OnReceivedData() to
  prevent the receive loop from dying when an unexpected Exception occurs.
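A sketch of both halves of this change, assuming a Nini [ClientStack.LindenUDP] config section; the key name used here is an assumption and may not match the actual option:

```csharp
using System;
using System.Net.Sockets;
using Nini.Config;

public class UdpReceiveSketch
{
    // Apply a configured receive buffer size (key name is hypothetical)
    public static Socket CreateUdpSocket(IConfig clientStackConfig)
    {
        Socket udpSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        int recvBufferSize = clientStackConfig.GetInt("client_socket_rcvbuf_size", 0); // 0 = keep OS default
        if (recvBufferSize > 0)
            udpSocket.ReceiveBufferSize = recvBufferSize;
        return udpSocket;
    }

    // The general catch clause: one malformed packet (or unexpected exception)
    // must not take down the async receive loop
    public static void OnReceivedData(byte[] buffer, int length)
    {
        try
        {
            // ... decode the packet and hand it to the client's circuit ...
        }
        catch (Exception e)
        {
            // The real code logs via log4net rather than Console
            Console.WriteLine("[LLUDPSERVER]: Unexpected exception in OnReceivedData: " + e);
        }
    }
}
```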
This may break a lot of things, but it needs to go in. It was tested in standalone and the UCI grid, but it needs a lot more testing.
Known problems:
* HG asset transfers are borked for now
* the "missing texture" placeholder texture is missing
* 3 unit tests commented out for now
* WebStatsModule doesn't crash on restart. GodsModule doesn't crash when there is no Dialog Module. LLUDPServer doesn't crash when the Operation was Aborted.
* ODEPlugin does 'Almost NaN' sanity checks.
* ODEPlugin sacrifices NaN avatars to the NaN black hole to appease it and keep it from sucking the rest of the world in.
These changes replace all direct references to the AssetCache with
IAssetCache. There is no change to functionality. Everything works as
before.
This is laying the groundwork for making it possible to register
alternative asset caching mechanisms without disrupting other parts of
OpenSim or their dependencies upon AssetCache functionality.
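To make the dependency direction concrete: consumers take the interface, and any cache implementation can be registered behind it. This is a hedged illustration; the member names below are placeholders rather than the actual IAssetCache signature.

```csharp
// Stub standing in for OpenSim.Framework.AssetBase in this sketch
public class AssetBase
{
    public string ID;
    public byte[] Data;
}

// Placeholder members; the real IAssetCache interface differs
public interface IAssetCache
{
    void Cache(AssetBase asset);
    AssetBase Get(string assetId);
}

public class TextureFetcher
{
    private readonly IAssetCache m_assetCache;

    // Any IAssetCache implementation (memory, disk, distributed...) can be injected
    public TextureFetcher(IAssetCache assetCache)
    {
        m_assetCache = assetCache;
    }

    public AssetBase FetchTexture(string textureId)
    {
        return m_assetCache.Get(textureId);
    }
}
```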
This changeset begins splitting OpenSim.Region.Environment into a
"framework" part and a modules-only part. This first changeset
refactors OpenSim.Region.Environment.Scenes,
OpenSim.Region.Environment.Interfaces, and OpenSim.Region.Interfaces
into OpenSim.Region.Framework.{Interfaces,Scenes}, leaving only region
modules in OpenSim.Region.Environment.
The next step will be to move region modules up from
OpenSim.Region.Environment.Modules to OpenSim.Region.CoreModules and
then sort out which modules are really core modules and which should
move out to forge.
I've been very careful to NOT BREAK anything, and I hope I've
succeeded. As this is the work of a whole week, I hope I managed to
keep track of the patches applied over the last week --- could any of
you who checked in stuff have a look at whether it survived? Thanks!
* Really this should be 1, but I think that this would be too slow compared to a Second Life server until we improve our ability to send textures of variable quality
* This may improve one aspect of sim performance where there are many avatars. However, there are still other performance problems that are unrelated to this change
* Value may be further tuned
* Removed temporary decals since the multiplier setting will stick around now
* This should probably be 1, but currently it defaults to 8 to reflect what was being done in OpenSim before this revision. So if the client requested a maximum throttle
of 1500 kilobits per second, we would actually send out 1500 kilobytes per second
* Adjusting this multiplier down towards 1 may improve your OpenSim experience, though in other situations it may degrade it (e.g. if you're running a standalone over high-bandwidth links)
* This is currently a user setting because adjusting it down may currently reveal other OpenSim bugs.
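To make the arithmetic above explicit, here is a worked example (illustrative names, not the actual throttle code):

```csharp
using System;

public class ThrottleMultiplierExample
{
    public static void Main()
    {
        int requestedBitsPerSecond = 1500 * 1000; // client asks for 1500 kilobits/s
        float multiplier = 8.0f;                  // current default; 1.0 is the "honest" value

        // Dividing by 8 converts bits to bytes; a multiplier of 8 cancels that out
        int bytesPerSecond = (int)(requestedBitsPerSecond / 8 * multiplier);

        Console.WriteLine(bytesPerSecond); // 1,500,000 bytes/s with multiplier 8
                                           // 187,500 bytes/s with multiplier 1
    }
}
```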
* This may help us detect if mysterious UDP disconnects are happening because of this.
* Shouldn't be any functional change but I would appreciate a buddy check from Teravus if he has time (as for all client stack changes)
* This moves authentication from the client thread (where failure was difficult to detect) to the particular thread handling that packet
* I've kept the authentication outside of the crucial clientCircuits lock (though any delay here is probably swamped by the other delays associated with login)
* Also added more to the unit test to ensure this doesn't regress
* Not sure why things still worked in the presence of this bug - possibly the problem is compensated for later on. If you are having UDP session problems this bug fix may help
(though no guarantees).
* Guys, there's an endless loop there *ON PURPOSE*. Please don't try to *fix* it. We must continue to process the UDP stream buffer on clients that disconnected nastily until it ends or the UDP server accept thread will die a horrible death.
* Regarding an earlier change, I think it would be possible to eliminate the creation of new IPEndPoints on every end receive if we did the client circuit lookup before starting
the next receive. However, this would be a performance trade off and hence not worth trying without performance testing
* This widened what I think is an existing race condition where asynchronous receives could potentially stomp on each other's end points (though this must occur very rarely, if at
all, in reality)
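For reference, the receive pattern under discussion looks roughly like the sketch below (a generic Socket-based loop, not the actual OpenSimUDPBase code): each receive gets its own buffer and endpoint, which is what the allocation and race-condition comments above refer to.

```csharp
using System;
using System.Net;
using System.Net.Sockets;

public class UdpReceiveLoopSketch
{
    private class ReceiveState
    {
        public Socket Socket;
        public byte[] Buffer = new byte[4096];
        public EndPoint Remote = new IPEndPoint(IPAddress.Any, 0);
    }

    public void AsyncBeginReceive(Socket udpSocket)
    {
        // Fresh buffer and endpoint per receive, so overlapping callbacks
        // never stomp on each other's state
        ReceiveState state = new ReceiveState { Socket = udpSocket };
        udpSocket.BeginReceiveFrom(state.Buffer, 0, state.Buffer.Length, SocketFlags.None,
            ref state.Remote, AsyncEndReceive, state);
    }

    private void AsyncEndReceive(IAsyncResult ar)
    {
        ReceiveState state = (ReceiveState)ar.AsyncState;
        int bytesRead = state.Socket.EndReceiveFrom(ar, ref state.Remote);

        // Doing the client circuit lookup for state.Remote here, before the
        // next BeginReceiveFrom, would avoid the per-packet allocation but
        // would also serialize receives
        AsyncBeginReceive(state.Socket);

        // ... hand (state.Buffer, bytesRead, state.Remote) off to packet processing ...
    }
}
```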
* This allows multiple user profile providers to be specified in OpenSim.ini separated by commas
* If multiple providers are specified then a request for a user profile will query each in turn until the profile is either found or all have been queried
* Unfortunately I don't believe this order can currently be specified, which if true is something that will need to be fixed.
* Thanks to smeans for the original patch.
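A minimal sketch of the query-each-in-turn behavior described above; the interface and type names here are simplified stand-ins rather than the actual provider API.

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-in for the real user profile type
public class UserProfileData
{
    public Guid UserId;
    public string FirstName;
    public string LastName;
}

public interface IUserProfileProvider
{
    // Returns null if this provider does not know the user
    UserProfileData GetUserProfile(Guid userId);
}

public class ChainedProfileService
{
    private readonly List<IUserProfileProvider> m_providers = new List<IUserProfileProvider>();

    public void AddProvider(IUserProfileProvider provider)
    {
        m_providers.Add(provider); // registration order is the query order
    }

    public UserProfileData GetUserProfile(Guid userId)
    {
        // Query each provider in turn until a profile is found
        foreach (IUserProfileProvider provider in m_providers)
        {
            UserProfileData profile = provider.GetUserProfile(userId);
            if (profile != null)
                return profile;
        }
        return null; // no provider knew the user
    }
}
```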
* Experimenting with the PacketPool mechanism.
* It's still disabled in the code, however there's now a flag to enable it.
* Converted to use Generic Collections vs Hashtables, also now uses a list of 'OK to pool' packets, starting with the high volume PacketAck packet.
* Switched it on by default
* Updated OpenSim.ini.example to reflect this
* Caught a UDP Server issue that occurs when the network pipe is saturated
* Still experimental :D
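A rough sketch of the pool shape described above (generic collections plus an "OK to pool" whitelist, seeded with PacketAck). Names are illustrative and the real PacketPool API differs; the libopenmetaverse Packet/PacketType types are assumed.

```csharp
using System.Collections.Generic;
using OpenMetaverse.Packets;

public class SimplePacketPool
{
    // Only packet types on this whitelist are ever recycled
    private readonly HashSet<PacketType> m_poolable =
        new HashSet<PacketType> { PacketType.PacketAck };

    private readonly Dictionary<PacketType, Stack<Packet>> m_pool =
        new Dictionary<PacketType, Stack<Packet>>();

    public Packet GetPacket(PacketType type)
    {
        lock (m_pool)
        {
            Stack<Packet> stack;
            if (m_pool.TryGetValue(type, out stack) && stack.Count > 0)
                return stack.Pop(); // reuse a previously returned instance
        }
        return Packet.BuildPacket(type); // fall back to normal construction
    }

    public void ReturnPacket(Packet packet)
    {
        if (!m_poolable.Contains(packet.Type))
            return; // only recycle whitelisted, easily reset packet types

        lock (m_pool)
        {
            Stack<Packet> stack;
            if (!m_pool.TryGetValue(packet.Type, out stack))
            {
                stack = new Stack<Packet>();
                m_pool[packet.Type] = stack;
            }
            stack.Push(packet);
        }
    }
}
```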
Persist floating text, rotation, texture animation, and particle system settings in the region database.
This will make "Eye Candy" scripts work without modification in
XEngine. The use of the CHANGED_REGION_RESTART hack is no longer
needed. Implemented in MySQL only; hovertext also in SQLite.
* This is a HUGE OMG update and will definitely have unknown side effects.. so this is really only for the strong hearted at this point. Regular people should let the dust settle.
* This has been tested to work with most basic functions. However.. make sure you back up 'everything' before using this. It's that big!
* Essentially we're back at square 1 in the testing phase.. so let's identify things that broke.