Also closes behaviours on disconnect instead of interrupt, though this makes no practical difference.
If the existing behaviour is None, other added behaviours will not take effect until None is removed (as None is an infinite wait until interrupted).
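As an illustration only, here is a minimal sketch of how a blocking "None" behaviour can starve later behaviours in a simple behaviour loop. Class and member names here are hypothetical, not pCampbot's actual ones.

    // Illustrative sketch only; names are hypothetical, not pCampbot's.
    using System.Collections.Generic;
    using System.Threading;

    public interface IBehaviour
    {
        void Action();      // returns when the behaviour step completes
        void Interrupt();   // asks a blocking behaviour to return
    }

    public class NoneBehaviour : IBehaviour
    {
        private readonly ManualResetEvent m_interrupted = new ManualResetEvent(false);

        // Waits indefinitely, so any behaviour queued after this one never runs
        // until None is removed (i.e. interrupted).
        public void Action() { m_interrupted.WaitOne(); }
        public void Interrupt() { m_interrupted.Set(); }
    }

    public class BehaviourRunner
    {
        private readonly List<IBehaviour> m_behaviours = new List<IBehaviour>();

        public void Add(IBehaviour b) { lock (m_behaviours) m_behaviours.Add(b); }

        public void RunOnce()
        {
            IBehaviour[] current;
            lock (m_behaviours) current = m_behaviours.ToArray();

            // Behaviours run in turn; one that never returns blocks all the rest.
            foreach (IBehaviour b in current)
                b.Action();
        }
    }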
If n > 1 for RootTerseUpdatePeriod, only every nth terse update is actually sent to observers in the same region, unless velocity is effectively zero (to stop avatar drift).
If n > 1 for ChildTerseUpdatePeriod, only every nth terse update is sent to observers in other regions, unless velocity is effectively zero.
Defaults are the same as before (all packets are sent).
The tradeoff is a reduction in UDP traffic versus the fidelity of observed avatar movement.
Increasing n above 1 immediately leads to jerky observed movement for root agents, though not for child agents, where experimentally n = 4 has been reached before jerkiness becomes noticeable.
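A minimal sketch of the kind of gating this implies; the counter and method names are illustrative, not the actual client view code.

    // Illustrative sketch only: send every nth terse update unless the avatar
    // has effectively stopped (in which case always send, to avoid drift).
    public class TerseUpdateGate
    {
        private readonly int m_period;   // e.g. RootTerseUpdatePeriod from [InterestManagement]
        private int m_counter;

        public TerseUpdateGate(int period) { m_period = period; }

        public bool ShouldSend(float speedSquared)
        {
            const float effectivelyZero = 0.001f;

            m_counter++;

            // Always send when velocity is effectively zero so observers don't
            // see the avatar stuck slightly away from its true resting position.
            if (speedSquared < effectivelyZero)
                return true;

            // Otherwise only let every nth update through.
            return m_period <= 1 || m_counter % m_period == 0;
        }
    }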
Rapid polls are more expensive than triggered events (several polls vs one trigger) and may be problematic on heavily loaded simulators where many threads are vying for processor time.
A triggered event is also slightly quicker as there is no maximum 200ms wait between polls.
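For illustration, the difference between the two waiting styles looks roughly like this (a sketch, not the actual OpenSimulator code):

    using System.Threading;

    public class WaitStyles
    {
        private volatile bool m_callbackReceived;
        private readonly ManualResetEventSlim m_callbackEvent = new ManualResetEventSlim(false);

        // Polling: repeatedly wakes up, checks and sleeps, so it burns scheduler
        // time and can add up to ~200ms of latency after the event has happened.
        public bool PollWait(int timeoutMs)
        {
            int waited = 0;
            while (!m_callbackReceived && waited < timeoutMs)
            {
                Thread.Sleep(200);
                waited += 200;
            }
            return m_callbackReceived;
        }

        // Triggered: a single wait that is released the moment the callback
        // arrives, with no polling interval.
        public bool TriggeredWait(int timeoutMs)
        {
            return m_callbackEvent.Wait(timeoutMs);
        }

        public void OnCallback()
        {
            m_callbackReceived = true;
            m_callbackEvent.Set();
        }
    }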
This is already going to be correctly set by WaitForUpdateAgent() earlier on in that method, which is always called where a callback to the originating region is required.
This kind of polling is very expensive with many bots/polling threads and appears to be the primary cause of bot falloff from the client end at higher loads, where inbound packet threads can't run in time due to contention and the simulator disconnect timeout occurs.
This adds the "show stats", "stats record", etc. commands and information on available Threadpool threads, etc.
It also adds the Watchdog which logs warnings if time between executions is unexpectedly large.
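A minimal sketch of the kind of check such a watchdog performs; the names and threshold below are illustrative, not the actual implementation.

    using System;
    using System.Collections.Generic;

    // Illustrative only: compare each thread's last "alive" tick against a
    // threshold and warn if it has not run for too long.
    public class WatchdogSketch
    {
        private const int TimeoutMs = 5000; // illustrative threshold
        private readonly Dictionary<string, int> m_lastTick = new Dictionary<string, int>();

        // Called by each monitored thread on every loop iteration.
        public void Alive(string threadName)
        {
            lock (m_lastTick)
                m_lastTick[threadName] = Environment.TickCount;
        }

        // Called on a timer to find threads that have not checked in recently.
        public void Check()
        {
            int now = Environment.TickCount;

            lock (m_lastTick)
            {
                foreach (KeyValuePair<string, int> kvp in m_lastTick)
                {
                    int elapsed = now - kvp.Value;
                    if (elapsed > TimeoutMs)
                        Console.WriteLine(
                            "[WATCHDOG]: Thread {0} has not run for {1} ms", kvp.Key, elapsed);
                }
            }
        }
    }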
This is to avoid issues where many bots connect to a single end point with multiple regions, where each region requires a long-lived poll connection for each bot.
This lock serialized all requests and made the inventory throttling in WebFetch redundant.
By moving this lock, two simultaneous requests may now take place, which may help with http://opensimulator.org/mantis/view.php?id=7054
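Roughly, the change amounts to narrowing what the lock protects, as in the sketch below (not the actual module code):

    public class FetchSketch
    {
        private readonly object m_lock = new object();
        private int m_pendingRequests;

        // Before: the whole fetch, including the slow remote inventory call,
        // ran inside the lock, so all requests were strictly serialized.
        public object FetchSerialized(string id)
        {
            lock (m_lock)
                return DoRemoteInventoryCall(id);
        }

        // After: the lock only protects shared bookkeeping, so two slow remote
        // calls can be in flight at the same time.
        public object FetchConcurrent(string id)
        {
            lock (m_lock)
                m_pendingRequests++;

            try
            {
                return DoRemoteInventoryCall(id);
            }
            finally
            {
                lock (m_lock)
                    m_pendingRequests--;
            }
        }

        // Stands in for the actual network call to the inventory service.
        private object DoRemoteInventoryCall(string id) { return null; }
    }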
Experimentally, on the Linden Lab grid the avatar can rotate slightly before triggering AvatarUpdates, whereas this is practically impossible in OpenSimulator.
These updates allow other avatars to see rotations, though sensitivity is low since other avatars can only be seen in one of 8 body rotations.
This commit changes the sensitivity from 0.01 to 0.1, which better matches LL and reduces UDP traffic, with a beneficial impact on network and CPU load.
This has no impact on rotations in the simulator itself, so simulation fidelity is the same as before.
To change this setting back for test/other purposes, edit RootRotationUpdateTolerance in the [InterestManagement] section of OpenSim.ini
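A sketch of what the tolerance means in practice; the comparison shown is a simplification, though the config name and section match the text above.

    using System;

    // Illustrative only: a terse update is triggered when the avatar's rotation
    // has moved further from the last sent rotation than the tolerance.
    public static class RotationToleranceSketch
    {
        // Default raised from 0.01f to 0.1f; configurable as
        // RootRotationUpdateTolerance in [InterestManagement] in OpenSim.ini.
        public static bool NeedsUpdate(
            float[] lastSentRot, float[] currentRot, float tolerance = 0.1f)
        {
            // Treat the quaternions as 4-vectors and compare their distance
            // against the tolerance (a simplification of the real check).
            double sumSq = 0;
            for (int i = 0; i < 4; i++)
            {
                double d = lastSentRot[i] - currentRot[i];
                sumSq += d * d;
            }

            return Math.Sqrt(sumSq) > tolerance;
        }
    }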
Running out of such threads under heavy load causes delayed packet processing, which can lead to spurious UDP resends and knock-on issues.
We already massively boost the min/max builtin pool worker and IOCP threads (which even with STP are still used for inbound network requests) without obvious adverse effects.
The threads are only instantiated if they are required.
This change does not affect other async_call_method options.
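For reference, boosting the built-in pool bounds mentioned above uses the standard .NET calls; the numbers below are placeholders, not the values chosen in the config.

    using System;
    using System.Threading;

    public static class PoolBoost
    {
        public static void Apply()
        {
            // Raise the minimum so the runtime doesn't throttle thread creation
            // under a burst of inbound work (placeholder values; the real code
            // takes these from config).
            ThreadPool.SetMinThreads(32, 32);

            // Raise the maximum worker and IOCP thread counts; threads are still
            // only created if they are actually needed.
            ThreadPool.SetMaxThreads(500, 500);

            int workers, iocp;
            ThreadPool.GetMinThreads(out workers, out iocp);
            Console.WriteLine("Min worker threads: {0}, min IOCP threads: {1}", workers, iocp);
        }
    }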
Move the experimental extra features functionality into the GridService. This sends default values for map, search and destination guide, plus ExportSupported control to the region on startup. Please watch http://opensimulator.org/wiki/SimulatorFeatures_Extras for changes and documentation.
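As a rough illustration of the shape of the data involved, the extras amount to a key/value map of default URLs and capability flags handed to each region on startup. The key names below are hypothetical placeholders; consult the wiki page above for the real ones.

    using System.Collections.Generic;

    // Illustrative only: the extra features sent to a region on startup.
    public static class ExtraFeaturesSketch
    {
        public static Dictionary<string, object> Build()
        {
            return new Dictionary<string, object>
            {
                // Hypothetical key names for illustration; see the
                // SimulatorFeatures_Extras wiki page for the actual ones.
                { "map-server-url", "http://example.com/map" },
                { "search-server-url", "http://example.com/search" },
                { "destination-guide-url", "http://example.com/guide" },
                { "ExportSupported", true }
            };
        }
    }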
These govern when AgentUpdates are sent to observers on position, rotation and velocity changes to an avatar (including the avatar themselves).
Higher values reduce AgentUpdate traffic but beyond a certain level will degrade the smoothness of avatar movement, both for the avatar itself and as perceived by observers.
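A sketch of the kind of check these tolerances drive; this is simplified, and the real code compares against the values last actually sent.

    // Illustrative only: an update is due when any of position, rotation or
    // velocity has drifted beyond its configured tolerance since the last
    // update sent to observers.
    public static class UpdateToleranceSketch
    {
        public static bool UpdateDue(
            float posDelta, float rotDelta, float velDelta,
            float posTolerance, float rotTolerance, float velTolerance)
        {
            return posDelta > posTolerance
                || rotDelta > rotTolerance
                || velDelta > velTolerance;
        }
    }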
This covers event queue setup messages and some outgoing messages (e.g. EnableSimulator)
In my experience these messages are only useful if you really know what they mean and you're looking for them
Otherwise, they're quite spammy.
Event queue DebugLevel 1 is enabled with the "debug eq 1" console command
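The effect is simply to gate these messages behind the module's debug level, roughly as sketched below (names are illustrative, not the actual event queue module code):

    using System;

    public class EventQueueLoggingSketch
    {
        // Set to 1 via the "debug eq 1" console command, 0 by default.
        public int DebugLevel { get; set; }

        public void Enqueue(string messageName, string avatarName)
        {
            // Setup/outgoing messages such as EnableSimulator are now only
            // logged when the debug level has been raised.
            if (DebugLevel > 0)
                Console.WriteLine(
                    "[EVENT QUEUE]: Enqueuing {0} for {1}", messageName, avatarName);

            // ... actual enqueue of the message would happen here ...
        }
    }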
On non-HG this is the only recognized failure state, so we can return more information in the error result.
On HG there are multiple failure states which would require more work to distinguish, so we currently return the unsatisfying "Internal Error", like some other existing calls.
This may have been the trigger for the CheckSendingPatchesToClients() dictionary out-of-sync exceptions in today's load test.
Don't need to check ContainsKey() since Remove() returns false on a request to remove a key that it doesn't have
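For reference, Dictionary<TKey, TValue>.Remove() already handles the missing-key case:

    using System;
    using System.Collections.Generic;

    public static class RemoveExample
    {
        public static void Main()
        {
            var patches = new Dictionary<int, string> { { 1, "patch" } };

            // Redundant pattern: the extra lookup buys nothing.
            if (patches.ContainsKey(2))
                patches.Remove(2);

            // Remove() alone is enough; it returns false if the key is absent
            // and never throws.
            bool removed = patches.Remove(2);
            Console.WriteLine(removed); // False
        }
    }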
Allows experiments in manually reducing updates under heavy load.
Activated by "debug scene set client-upd-per" console command.
In a simple test, as few as every 4th update can be sent before observed movement starts to become disturbingly rubber-banded.
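For example, limiting to every 4th update (as in the test above) would presumably be set with something along the lines of:

    debug scene set client-upd-per 4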
Corresponds to the ResendAppearanceUpdates setting in [Appearance] in OpenSim.ini
This was originally implemented to alleviate cloud appearance problems but could be too expensive with large numbers of avatars.
This governs when child agent position changes are sent to neighbouring regions.
Corresponding config parameter is ChildReprioritizationDistance in [InterestManagement] in OpenSim.ini
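A sketch of the decision this distance drives; this is simplified, and the real check works from the position last reported to the neighbour.

    using System;

    public static class ChildReprioritizationSketch
    {
        // Illustrative only: a child agent position update is sent to a
        // neighbouring region once the avatar has moved further than the
        // configured distance from the last position that neighbour was given.
        public static bool SendToNeighbour(
            float lastSentX, float lastSentY, float lastSentZ,
            float curX, float curY, float curZ,
            float childReprioritizationDistance)
        {
            double dx = curX - lastSentX;
            double dy = curY - lastSentY;
            double dz = curZ - lastSentZ;

            return Math.Sqrt(dx * dx + dy * dy + dz * dz) > childReprioritizationDistance;
        }
    }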
For test purposes.
This gives a count of all requests made to the remote inventory service.
This is finer grained than inventory.httpfetch.ProcessedFetchInventoryRequests, since such a request can comprise many individual inventory service calls.
In addition, this will count requests that don't go through the HTTP inventory fetch (e.g. HG, archiving, etc.)
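A sketch of the counting involved; names here are illustrative, and the real code uses the framework's stats classes rather than a bare counter.

    using System.Threading;

    public class InventoryRequestCounterSketch
    {
        // Incremented once per individual call to the remote inventory service,
        // so a single HTTP inventory fetch that fans out into several service
        // calls bumps the counter several times.
        private long m_requestsMade;

        public long RequestsMade { get { return Interlocked.Read(ref m_requestsMade); } }

        public object GetItem(string id)
        {
            Interlocked.Increment(ref m_requestsMade);
            // ... actual remote call would go here ...
            return null;
        }
    }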