This is done by adjusting the order of the code so that SceneAgent is always set before the client is added.
Various parts of the code (rightly) assume that a client registered with the manager always has a SceneAgent set.
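A minimal, self-contained sketch of the reordering (the types here are illustrative stand-ins, not the real OpenSim classes):

    using System.Collections.Generic;

    class ScenePresence { }

    class Client
    {
        // Consumers assume this is non-null for any registered client.
        public ScenePresence SceneAgent;
    }

    class ClientManager
    {
        private readonly List<Client> m_clients = new List<Client>();

        public void AddClient(Client client, ScenePresence sp)
        {
            // Fix: assign SceneAgent *before* the client becomes visible
            // to other code via the manager's collection.
            client.SceneAgent = sp;
            m_clients.Add(client);
        }
    }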
When an HG avatar enters a scene, processing of its entity updates is delayed. This could be due to crowding out by other updates, or to something else.
This delay in one's own avatar movement updates results in movement lag experienced on the client. Bypassing the internal LLClientView handling for these packets appears to resolve the issue.
This appears most noticeably for avatars with attachments, though it has occasionally been seen on avatars without them. It hasn't been observed for non-HG avatars in general.
I will be investigating exactly what the problem is, at which point there will be a more permanent solution.
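A hedged, self-contained sketch of the workaround (illustrative types and names, not LLClientView itself): the avatar's own movement updates skip the internal update queue and are sent immediately.

    using System;
    using System.Collections.Generic;

    class UpdateSenderSketch
    {
        private readonly Queue<string> m_entityUpdates = new Queue<string>();
        private readonly string m_ownAvatarId;

        public UpdateSenderSketch(string ownAvatarId)
        {
            m_ownAvatarId = ownAvatarId;
        }

        public void SendUpdate(string entityId)
        {
            if (entityId == m_ownAvatarId)
                SendImmediately(entityId);          // bypass the queue to avoid movement lag
            else
                m_entityUpdates.Enqueue(entityId);  // normal queued path for everything else
        }

        private void SendImmediately(string entityId)
        {
            Console.WriteLine("sent update for " + entityId);
        }
    }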
This information is also available via "show server throttles", but that command is aimed at general information rather than at getting and setting parameters on the fly for debugging purposes.
This is because objects with many parts can have a lot of XML to load into memory, and this has been seen to have a noticeable performance impact.
Streaming, by contrast, has been seen to reduce that impact during normal serialization.
The implementation is messy, but I couldn't see a better way of doing it when you can't assume that you know the exact structure of the input XML.
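A minimal sketch of the streaming approach, assuming System.Xml and an illustrative "SceneObjectPart" element name (not necessarily the exact serialization format):

    using System.IO;
    using System.Xml;

    class StreamingDeserializerSketch
    {
        public static void Deserialize(Stream stream)
        {
            // Read the XML as a forward-only stream instead of loading
            // the whole document into memory first.
            using (XmlReader reader = XmlReader.Create(stream))
            {
                while (reader.Read())
                {
                    if (reader.NodeType == XmlNodeType.Element
                        && reader.Name == "SceneObjectPart")
                    {
                        // ReadSubtree() exposes just this element, so each
                        // part can be handled without buffering the rest.
                        using (XmlReader part = reader.ReadSubtree())
                            ProcessPart(part);
                    }
                }
            }
        }

        static void ProcessPart(XmlReader part)
        {
            part.MoveToContent();   // stand-in for real per-part handling
        }
    }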
Doing this required GetMesh to be converted to a BaseStreamHandler.
Unlike the GetTexture connector, there is no redirect URL functionality yet (this wasn't present in the first place).
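A generic sketch of the streaming-handler idea using System.Net (hedged: this is not the actual BaseStreamHandler API, and the content type is an assumption):

    using System.IO;
    using System.Net;

    class StreamingMeshHandlerSketch
    {
        // Write the mesh bytes straight to the response stream rather
        // than buffering them into a single byte array first.
        public void Handle(HttpListenerContext ctx, Stream meshData)
        {
            ctx.Response.ContentType = "application/vnd.ll.mesh";
            meshData.CopyTo(ctx.Response.OutputStream);
            ctx.Response.Close();
        }
    }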
This has two aims:
1) Reduce initial teleport failures when a foreign Hypergrid user enters a region, by not holding up the teleport for attachment rez (this can be particularly costly when HG fetches all the assets in the object graph).
2) Reduce server load that may impact other simulator activities.
This complements existing JobEngine options that perform initial login attachment rez and appearance send in consecutive tasks.
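A self-contained sketch of the idea (the job-queue API here is illustrative, not the real JobEngine):

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    class JobEngineSketch
    {
        private readonly BlockingCollection<Action> m_jobs
            = new BlockingCollection<Action>();

        public JobEngineSketch()
        {
            var worker = new Thread(() =>
            {
                foreach (Action job in m_jobs.GetConsumingEnumerable())
                    job();
            });
            worker.IsBackground = true;
            worker.Start();
        }

        // Teleport completion would queue the rez instead of doing it
        // inline, e.g. engine.QueueJob(() => RezAttachments(agent));
        // where RezAttachments is a hypothetical stand-in.
        public void QueueJob(Action job)
        {
            m_jobs.Add(job);
        }
    }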
This was because specifying a max client throttle would always request the max from the parent server throttle, regardless of the total actually requested on the client throttle.
This would lead to a lower server multiplier than expected.
This change also adds a 'target' column to the "show throttles" output, showing the target rate (as set by the client) when adaptive throttling is active.
This commit also re-adds the functionality recently lost in 5c1a1458 to set a max client throttle while adaptive throttling is active.
This commit also adds the TestClientThrottlePerClientAndRegionLimited and TestClientThrottleAdaptiveNoLimit regression tests.
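A self-contained sketch of the rate-request fix (illustrative, not the real TokenBucket code):

    using System;

    class ClientThrottleSketch
    {
        public long MaxDripRate;     // 0 = no limit set
        public long TotalRequested;  // sum of per-category requests

        // Before the fix this effectively returned MaxDripRate whenever
        // a max was set, overstating demand to the parent server throttle
        // and producing a lower server multiplier than expected.
        public long RequestedDripRate
        {
            get
            {
                return MaxDripRate == 0
                    ? TotalRequested
                    : Math.Min(TotalRequested, MaxDripRate);
            }
        }
    }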
This only had one child, which is the 'adaptive' token bucket.
So, from testing and current analysis, we can use that bucket directly, which simplifies the code.
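An illustrative sketch of the hierarchy change (stand-in classes, not the real TokenBucket implementation):

    using System.Collections.Generic;

    class TokenBucket
    {
        public readonly List<TokenBucket> Children = new List<TokenBucket>();

        public TokenBucket(TokenBucket parent)
        {
            if (parent != null)
                parent.Children.Add(this);
        }
    }

    class AdaptiveTokenBucket : TokenBucket
    {
        public AdaptiveTokenBucket(TokenBucket parent) : base(parent) { }
    }

    class Demo
    {
        static void Main()
        {
            var serverThrottle = new TokenBucket(null);

            // Before: an intermediate per-client bucket whose only child
            // was the adaptive bucket.
            //   var throttleClient = new TokenBucket(serverThrottle);
            //   var adaptive = new AdaptiveTokenBucket(throttleClient);

            // After: the adaptive bucket is used directly.
            var adaptive = new AdaptiveTokenBucket(serverThrottle);
        }
    }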
This is the same as already done for scene_throttle_max_bps
Internally, the token buckets are denominated in bytes, and the other help text makes it clear that the number is bytes per second
(though it made the wrong assumption that 1 Mbit = 1024 * 1024 bits, whereas 1 Mbit = 1000 kbit = 1,000,000 bits).
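A quick arithmetic check of the corrected unit, as plain C#:

    class ThrottleUnits
    {
        static void Main()
        {
            // 1 Mbit/s = 1,000,000 bits/s = 125,000 bytes/s.
            const long BitsPerMbit = 1000 * 1000;
            long bytesPerSecond = BitsPerMbit / 8;          // 125000

            // The old help text's assumption would instead give:
            long wrongBytesPerSecond = 1024L * 1024 / 8;    // 131072
        }
    }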
This is separate from the user-oriented "show throttles" command, since for debugging one often only wants to see the client throttle settings that vary.
It currently displays the max scene throttle and the adaptive throttles configuration, if set.