Instead of processing all incoming attachment scene objects concurrently, process them consecutively to eliminate potential overload from this source.
This is a naive implementation because it does not currently account for slow foreign asset services.
Although it may take longer, this approach may also improve attachment visibility for HG avatars
since the scene object is now always added to the scene after receiving assets from the foreign service and not before.
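A minimal sketch of the consecutive-processing idea, assuming a hypothetical AttachmentRezQueue rather than the actual OpenSim classes:

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    // Hypothetical illustration only: incoming attachment scene objects are queued
    // and rezzed one at a time by a single worker, instead of each on its own thread.
    public class AttachmentRezQueue
    {
        private readonly BlockingCollection<Action> m_jobs = new BlockingCollection<Action>();

        public AttachmentRezQueue()
        {
            var worker = new Thread(() =>
            {
                // Process jobs consecutively; a slow foreign asset service will
                // delay later jobs (the "naive" aspect noted above).
                foreach (Action job in m_jobs.GetConsumingEnumerable())
                    job();
            });
            worker.IsBackground = true;
            worker.Start();
        }

        public void Enqueue(Action rezAttachment)
        {
            m_jobs.Add(rezAttachment);
        }
    }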
This is to avoid user confusion in the OSCC rehearsal, as users are often not aware that this fails because no e-mail address is set.
This may also be failing in the hypergrid case, though that could be a configuration issue.
This is meant as a temporary solution.
Unfortunately, it's not currently easy to do this with "max-agent-limit"
- this must be separately set as MaxAgents in region config if it's to persist over restarts.
This currently allows one to set two region parameters:
agent-limit <int> will set the current root agent limit for the region, as also settable through the viewer, though some viewers impose a maximum settable value (e.g. 100).
max-agent-limit <int> will set the maximum allowed root agent limit. This can also be set via the MaxAgents parameter in the region config.
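For example, on the region console (values illustrative):

    agent-limit 40
    max-agent-limit 100

and, to persist the maximum across restarts, in the region config (e.g. Regions.ini):

    MaxAgents = 100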
This resolves an issue with pCampbot where some bots would occasionally connect with the same UDP source port.
This sometimes led to console messages where bots would report receiving packets multiple times that weren't marked as resends.
DLLs built under Windows
At least on Mono 3.2.8 (but not under Windows), one can bind multiple UDP sockets to the same port by default.
Different simulators cannot demultiplex each other's messages, so a set of confusing, non-obvious errors arises if this occurs.
This change prevents such multiple binding.
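A rough sketch of the kind of guard this implies (not the actual OpenSim code), assuming that explicitly disabling address reuse before Bind is enough to make a second bind on the same port fail with a SocketException:

    using System;
    using System.Net;
    using System.Net.Sockets;

    public static class UdpBindGuard
    {
        public static Socket BindExclusive(int port)
        {
            var sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);

            // Explicitly turn off address reuse so that binding a port already in
            // use should throw a SocketException rather than silently succeed.
            sock.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, false);

            sock.Bind(new IPEndPoint(IPAddress.Any, port));
            return sock;
        }
    }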
This shows summary wearables information (shape, hair, etc.) for all avatars in the scene or specific information about a given avatar's wearables.
Similar to the existing "attachments show" command.
In "show throttles", also renames 'total' column to 'actual' to reflect that it is not necessarily the throttles requested for/by the client.
Also fills out 'target' in non-adapative mode to the actual throttle requested for/by the client.
This is done by adjusting the order of code so that SceneAgent will always be set before adding the client.
Various parts of the code (rightly) assume that a client registered to the manager will always have a SceneAgent set, no matter what.
When an HG avatar enters a scene, processing of its entity updates is delayed. This could be due to crowding out by other updates, or to something else.
This delay in one's own avatar movement updates results in movement lag experienced on the client. Avoiding the internal LLClientView for these packets appears to resolve this issue.
This appears most noticeably for avatars with attachments, though it has sometimes also been seen on those without. It has not been observed for non-HG avatars in general.
Will be investigating exactly what the problem is, at which point there will be a more permanent solution.
Information is also available via "show server throttles", but that is more for non-debug information than for getting and setting parameters on the fly for debug purposes.
This is because objects with lots of parts can have a lot of XML to load into memory, and this has been seen to have a noticeable performance impact,
whereas streaming has been seen to reduce that impact in normal serialization.
The implementation is messy, but I couldn't see a better way of doing it when you can't assume that you know the exact structure of the input XML.
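As a rough illustration of the streaming approach (generic XmlReader usage, not the actual serializer code; "SceneObjectPart" is used only as an example element name), reading node by node from the stream avoids materialising the whole document:

    using System;
    using System.IO;
    using System.Xml;

    public static class StreamingSceneObjectReader
    {
        // Illustrative only: walk the XML one node at a time from the stream,
        // handling elements as they are encountered, rather than first loading
        // the entire document (e.g. via XmlDocument) into memory.
        public static void Read(Stream xmlStream, Action<XmlReader> handlePart)
        {
            using (XmlReader reader = XmlReader.Create(xmlStream))
            {
                while (reader.Read())
                {
                    if (reader.NodeType == XmlNodeType.Element && reader.Name == "SceneObjectPart")
                        handlePart(reader);
                }
            }
        }
    }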
Doing this required GetMesh to be converted to a BaseStreamHandler.
Unlike the GetTexture connector, there is no redirect URL functionality yet (this wasn't present in the first place).
This has two aims:
1) Reduce initial teleport failures when a foreign Hypergrid user enters a region, by not holding up the teleport for attachment rez (this can be particularly costly when HG gets all assets in the object graph).
2) Reduce server load that may impact other simulator activities.
This complements existing JobEngine options that perform initial login attachment rez and appearance send in consecutive tasks.
This was because specifying a max client throttle would always request the max from the parent server throttle, no matter the actual total requested on the client throttle.
This would lead to a lower server multiplier than expected.
This change also adds a 'target' column to the "show throttles" output that shows the target rate (as set by the client) if adaptive throttles are active.
This commit also re-adds the functionality lost in recent commit 5c1a1458 to set a max client throttle when adaptive throttles are active.
This commit also adds the TestClientThrottlePerClientAndRegionLimited and TestClientThrottleAdaptiveNoLimit regression tests.
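A simplified, hypothetical sketch of the behaviour being fixed (ThrottleBucket, MaxRate and RequestedRate are illustrative names, not the actual OpenSim TokenBucket API): the child should request from its parent only what it actually needs, capped at the configured maximum, rather than always requesting the maximum:

    using System;

    // Hypothetical illustration of the throttle fix described above.
    public class ThrottleBucket
    {
        public long MaxRate;        // configured max client throttle (0 = unlimited)
        public long RequestedRate;  // total rate currently requested for/by the client

        // Before: always asked the parent for MaxRate, inflating the parent's view
        // of demand and lowering the server-level multiplier.
        // After: ask only for what is actually requested, capped at MaxRate.
        public long RateToRequestFromParent()
        {
            if (MaxRate <= 0)
                return RequestedRate;

            return Math.Min(RequestedRate, MaxRate);
        }
    }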
This only had one child, which is the 'adaptive' token bucket.
So from testing and current analysis, we can use that bucket directly, which simplifies the code.