This gives much better results for teleports between simulators over my LAN, where for some reason there is a pause before the receiving simulator processes UpdateAgent().
At this point, v2 teleports between neighbour and non-neighbour regions on a single simulator, between two v2 simulators, and between a v1 and a v2 simulator
are working okay for me in various scenarios (e.g. simple teleport; teleport back to the original region quickly and re-teleport; teleport back to a neighbour and re-teleport; etc.)
This was occurring because the teleport to B did not set DoNotCloseAfterTeleport on A, since A was a neighbour (the flag is normally not set for neighbours, to avoid the issue where the source region does not send Close() to regions that are still neighbours and hence never resets DoNotCloseAfterTeleport).
The fix here is to still set DoNotCloseAfterTeleport if the scene presence is still registered as in transit from A.
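As a sketch only (the stub types and the surrounding method are illustrative, not the actual OpenSimulator code), the check looks something like this:

    // Illustrative sketch of the fix described above.
    public class ScenePresenceStub
    {
        public bool IsInTransit;              // still registered as in transit from A
        public bool DoNotCloseAfterTeleport;  // suppresses the delayed close on A
    }

    public static class TeleportFixSketch
    {
        public static void PrepareSourceAgent(ScenePresenceStub sp, bool destinationIsNeighbour)
        {
            // Previously the flag was skipped for neighbours; the fix also sets it
            // when the presence is still in transit, so a quick re-teleport does not
            // have its connection torn down by the pending close from the first teleport.
            if (!destinationIsNeighbour || sp.IsInTransit)
                sp.DoNotCloseAfterTeleport = true;
        }
    }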
The root cause was that v2 was only closing neighbour agents if the root connection also needed a close.
However, fixing this requires that the neighbour regions also detect when they should not close because a re-teleport has re-established the child connection.
This involves restructuring the code to introduce a scene presence state machine that can serialize the different add and remove client calls that are now possible with the late close of the agent.
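Purely as an illustration of the idea (the actual state names and transitions in OpenSimulator may differ), serializing the add/remove calls could look like:

    // Illustrative scene presence state machine; states are assumptions.
    public enum PresenceState { Absent, CreatingAgent, Running, InTransit, PendingClose, Closing }

    public class PresenceStateMachine
    {
        private PresenceState m_state = PresenceState.Absent;
        private readonly object m_lock = new object();

        // Only one legal transition can win, so a late Close() racing a new
        // AddClient() for the same agent cannot corrupt the presence.
        public bool TryTransition(PresenceState from, PresenceState to)
        {
            lock (m_lock)
            {
                if (m_state != from)
                    return false;  // caller must handle the rejected transition
                m_state = to;
                return true;
            }
        }
    }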
This commit appears to fix these issues and improve teleport, but it still has holes, at least on quick re-teleporting (and possibly occasionally on ordinary teleports).
Also, this has not yet been completely tested in scenarios where regions are running on different simulators.
This fixes the problem of avatars bouncing when logged in.
Added a little height to the avatar height fudge factors to eliminate a problem where feet sank a little into the ground.
For unknown reasons, a dynamic function signature cannot have more than 5 parameters. The error message now tells you this fact so you can curse MS and then go change your function definitions.
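The guard is roughly of this shape (hypothetical registration method; only the 5-parameter limit comes from the text above):

    // Hypothetical sketch: reject dynamic functions with more than 5 parameters
    // up front, with a message that explains the limitation.
    using System;

    public static class DynamicFunctionRegistry
    {
        public static void Register(Delegate fn)
        {
            int count = fn.Method.GetParameters().Length;
            if (count > 5)
                throw new ArgumentException(
                    "Dynamic function '" + fn.Method.Name + "' has " + count +
                    " parameters; dynamic signatures cannot have more than 5. " +
                    "Change your function definition.");
        }
    }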
Disabled by default. Enable by setting
[Startup]ManagedStatsRemoteFetchURI="Something"
and thereafter "http://ServerHTTPPort/Something/" will return all the managed
stats (equivalent to the "show stats all" console command).
Accepts queries "cat=", "cont=" and "stat=" to specify statistic category,
container and statistic names. The special name "all" is the default and returns
all values in that group.
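For example (the URI segment, host, port and stat names below are placeholders; the query parameters are as described above):

    ; Example configuration
    [Startup]
    ManagedStatsRemoteFetchURI = "ManagedStats"

    ; Then, assuming the simulator's HTTP port is 9000:
    ;   http://myhost:9000/ManagedStats/                              -> all stats
    ;   http://myhost:9000/ManagedStats/?cat=all                      -> all categories
    ;   http://myhost:9000/ManagedStats/?cat=scene&stat=RootAgents    -> one stat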
This stops OpenSimulator from still trying to teleport the user if they hit cancel on the teleport screen or close the viewer whilst the protocol is trying to create an agent on the remote region.
Ideally, the code would also attempt to tell the destination simulator that the agent should be removed (accounting for issues where the destination was not responding in the first place, etc.)
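A sketch of the idea with a hypothetical cancellation flag (the real OpenSimulator code paths and names differ):

    // Illustrative: re-check for cancellation after the potentially slow
    // CreateAgent call to the remote region, before continuing the teleport.
    using System.Threading;

    public class TeleportCancelSketch
    {
        public bool DoTeleport(CancellationToken viewerCancelled)
        {
            if (!CreateAgentOnRemoteRegion())
                return false;

            // The user may have hit cancel or closed the viewer while
            // CreateAgent was in flight; abandon the teleport if so.
            if (viewerCancelled.IsCancellationRequested)
            {
                // Ideally, also ask the destination to remove the agent here.
                return false;
            }

            return true;  // continue with the rest of the teleport protocol
        }

        private bool CreateAgentOnRemoteRegion() => true;  // placeholder
    }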
This can currently only be activated with the console command "debug stats record start".
Off by default.
Records to the file OpenSimStats.log for the simulator and RobustStats.log for ROBUST.
This is for testing and debugging purposes to help determine whether a particular issue may be teleport related or not
"SIMULATION/0.2" (the newer teleport protocol) remains the default. If the source simulator only implements "SIMULATION/0.1" this will correctly allow fallback to the older protocol.
Specifying "SIMULATION/0.1" will force the older, less efficient protocol to always be used.
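Conceptually the fallback works as below (illustrative code; the real negotiation lives in the simulation service connectors):

    // Illustrative version negotiation: prefer SIMULATION/0.2, fall back to
    // SIMULATION/0.1 when that is all the other side supports.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class TeleportProtocolSketch
    {
        private static readonly string[] Preferred = { "SIMULATION/0.2", "SIMULATION/0.1" };

        public static string Negotiate(IEnumerable<string> remoteSupported, string forced = null)
        {
            // If the operator forces "SIMULATION/0.1", always use the older protocol.
            if (forced != null)
                return forced;

            foreach (string v in Preferred)
                if (remoteSupported.Contains(v))
                    return v;

            throw new InvalidOperationException("No common teleport protocol version");
        }
    }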
Registered integer constants were previously passed to scripts as boxed integers rather than as a proper reference to a new LSLInteger.
This fixes an exception when using a registered integer constant in a script.
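The shape of the fix, with a minimal stand-in for LSLInteger (illustrative, not the actual constant-registration code):

    // Illustrative: hand the script a boxed LSLInteger, which unboxes cleanly
    // to LSLInteger, instead of a boxed Int32, which does not and so threw.
    public struct LSLIntegerStub
    {
        public int value;
        public LSLIntegerStub(int i) { value = i; }
    }

    public static class ConstantRegistrySketch
    {
        public static object GetConstant(int raw)
        {
            // Before: return raw;            // boxed int -> invalid cast in the script
            return new LSLIntegerStub(raw);   // after: a proper LSLInteger value
        }
    }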
This is an experimental setting to control CPU spikes when an attachment-heavy avatar logs in, or when avatars with moderate numbers of attachments log in simultaneously.
It inserts a sleep, specified in milliseconds per attachment prim, after each attachment rez when an avatar logs in.
Default is 0 (no throttling).
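A minimal sketch of the throttle (the setting name and rez path here are illustrative):

    // Illustrative attachment rez throttle: sleep N ms per attachment prim
    // after each rez during login; 0 disables the throttle (the default).
    using System.Collections.Generic;
    using System.Threading;

    public class AttachmentRezThrottleSketch
    {
        public int ThrottlePerPrimMs = 0;  // hypothetical config value

        public void RezLoginAttachments(IEnumerable<int> attachmentPrimCounts)
        {
            foreach (int primCount in attachmentPrimCounts)
            {
                RezOneAttachment();
                if (ThrottlePerPrimMs > 0)
                    Thread.Sleep(ThrottlePerPrimMs * primCount);  // spread out the CPU spike
            }
        }

        private void RezOneAttachment() { /* placeholder */ }
    }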
"debug attachments <level>" changes to "debug attachments log <level>" which controls logging. A logging level of 1 will show the throttling performed if applicable.
Also adds "debug attachments status" command to show current throttle and debug logging levels.
What I believe is happening is that on initial terrain send, this is done one packet at a time.
With WaitOne, the outbound loop has enough time to loop and wait again after the first packet before the second, leading to a slower send.
This approach instead skips the wait if a packet was just sent and loops again immediately, which appears to lead to a quicker send without losing the CPU benefit of not continually looping when there is no outbound data.
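The loop change, sketched (illustrative; the real outbound loop in the UDP server is more involved):

    // Illustrative outbound loop: only block on the wait handle when the
    // previous iteration sent nothing, so back-to-back packets go out quickly.
    using System.Threading;

    public class OutboundLoopSketch
    {
        private readonly AutoResetEvent m_dataReady = new AutoResetEvent(false);
        private volatile bool m_running = true;

        public void Run()
        {
            while (m_running)
            {
                bool sentPacket = TrySendOnePacket();

                // Old behaviour: always wait here, adding latency between
                // consecutive packets. New behaviour: loop again immediately
                // if we just sent something; only wait when idle.
                if (!sentPacket)
                    m_dataReady.WaitOne(20);  // bounded wait keeps the loop live
            }
        }

        private bool TrySendOnePacket() => false;  // placeholder
    }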
Caps can now be handled externally when the cap is something other than "localhost". A new interface for handling external caps is supported, with an example implemented for Simian. The only Linden cap supporting this interface right now is the GetTexture cap.
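The interface might have roughly this shape (hypothetical; the actual interface and the Simian implementation differ in detail):

    // Hypothetical sketch of an external caps handler.
    public interface IExternalCapsHandlerSketch
    {
        // Returns true if an external URL was registered for the named cap
        // (e.g. "GetTexture") on behalf of the given agent.
        bool TryRegisterExternalCap(System.Guid agentId, string capName, out string externalUrl);
    }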
Adds grid service access capabilities. In conjunction with the corresponding Simian updates, this enables explicit per-simulator capability-based access to grid services. That enables grid owners to add or revoke access to the grid on a simulator-by-simulator basis.
Only HG Visitors get this var, to avoid spamming local users.
The config var is now called MapTileURL, to be consistent with the login one, and it's being picked up from either [LoginService], [HGWorldMap] or [SimulatorFeatures], just because I have a bad memory.
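For example (illustrative URL; define the var in whichever section you remember first):

    [LoginService]
    MapTileURL = "http://example.com/maptiles/"

    ; equally valid:
    ; [HGWorldMap]
    ; MapTileURL = "http://example.com/maptiles/"
    ;
    ; [SimulatorFeatures]
    ; MapTileURL = "http://example.com/maptiles/"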
This reflects the actual use of this stat - it hasn't recorded general exceptions for some time.
Make the sim extra stats collector draw its data from the stats manager rather than maintaining this data itself.
The major departure from flotsam is to send only one message per destination region, as opposed to one message per group member. This reduces messaging considerably in large groups that have clusters of members in certain regions.
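Conceptually (illustrative types; the real module routes through the grid messaging service):

    // Illustrative: bucket group members by their current region so the sender
    // emits one inter-region message per region rather than one per member.
    using System.Collections.Generic;
    using System.Linq;

    public class GroupMemberStub
    {
        public string AgentId;
        public string RegionId;
    }

    public static class GroupMessageFanOutSketch
    {
        public static void SendToGroup(IEnumerable<GroupMemberStub> members, string message)
        {
            foreach (var regionGroup in members.GroupBy(m => m.RegionId))
            {
                // One message per destination region; the receiving region
                // fans it out locally to each listed member.
                SendToRegion(regionGroup.Key,
                             regionGroup.Select(m => m.AgentId).ToList(),
                             message);
            }
        }

        private static void SendToRegion(string regionId, List<string> agentIds, string message)
        {
            /* placeholder for the actual inter-region send */
        }
    }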
This was a test setup failure, since code paths that add a duplicate root scene presence now require the EntityTransferModule to be present.
The test is fixed by adding this module to the test setup.
These were genuine failures caused by ScenePresence.CompleteMovement() waiting for an UpdateAgent from NPC introduction that would never come.
Instead, we now skip the wait if the agent is an NPC.
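Sketch of the guard (illustrative; the real wait is on the update-agent signal inside ScenePresence):

    // Illustrative: NPCs are introduced locally and never get an UpdateAgent
    // from a source region, so don't block waiting for one.
    using System.Threading;

    public class CompleteMovementSketch
    {
        private readonly ManualResetEvent m_updateAgentReceived = new ManualResetEvent(false);
        public bool IsNPC;  // hypothetical flag

        public void CompleteMovement()
        {
            if (!IsNPC)
                m_updateAgentReceived.WaitOne(10000);  // real agents: bounded wait

            // ... continue making the presence root ...
        }
    }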
- Child and root agents are only closed after 15 sec, maybe
- If the user comes back, they aren't closed, and everything is reused
- On the receiving side, clients and scene presences are reused if they already exist
- Caps are always recreated (this is where I spent most of my time!). It turns out that, because the agents carry the seeds around, the seed gets the same URL, except for the root agent coming back to a far-away region, which gets a new seed (because we don't know what its seed was in the departing region, and we can't send it back to the client when the agent returns there).
This is because a viewer returning by teleport before the 15 seconds are up would be disrupted by the close.
The 2-second delay is within the window where a normal viewer would not allow a teleport back anyway.
The SIMULATION/0.2 (V2) protocol will continue with the longer delay, since this is actually the behaviour viewers get from the LL grid,
and an early close causes other issues (the avatar being sent to infinite locations temporarily, etc.)
This prevents an issue if the user teleports back to the neighbour simulator of a source before 15 seconds have elapsed.
This more closely emulates observed Linden behaviour, though the timeout there is 50 secs and applies to all the pre-teleport agents.
Currently this sticks a DoNotClose flag on ScenePresence, though this may be temporary as it could possibly be incorporated into the ETM state machine.
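A sketch of the delayed close with the flag check (illustrative; the real timer and flag live on the scene presence):

    // Illustrative: schedule the close 15s out, but abandon it if the presence
    // was flagged DoNotClose in the meantime (e.g. the user teleported back).
    using System;
    using System.Threading;

    public class DelayedCloseSketch
    {
        public bool DoNotClose;  // set when the user returns before the timer fires
        private Timer m_closeTimer;

        public void ScheduleClose()
        {
            m_closeTimer = new Timer(_ =>
            {
                if (DoNotClose)
                {
                    DoNotClose = false;  // reset for the next teleport
                    return;              // presence is being reused; skip the close
                }
                CloseAgent();
            }, null, TimeSpan.FromSeconds(15), Timeout.InfiniteTimeSpan);
        }

        private void CloseAgent() { /* placeholder */ }
    }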
In this new protocol, and as committed before, the viewer is not sent EnableSimulator/EstablishAgentCommunication for the destination. Instead, it is sent TeleportFinish directly. TeleportFinish, in turn, makes the viewer send a UseCircuitCode packet followed by a CompleteMovementIntoRegion packet. These 2 packets tend to occur one after the other almost immediately, to the point that when CMIR arrives the client is not even connected yet and that packet is ignored (there might have been some race conditions here before); the viewer then sends CMIR again within 5-8 secs. But the delay between them may be higher in busier regions, which may lead to race conditions.
This commit improves the process so that there are no race conditions at the destination. CompleteMovement (triggered by the viewer) waits until UpdateAgent has been sent from the origin. UpdateAgent, in turn, waits until there is a *root* scene presence -- that is, it makes sure CompleteMovement has run MakeRoot. In other words, there are two threadlets at the destination, one from the viewer and one from the origin region, each waiting for the other to do the right thing. That makes it safe to close the agent at the origin upon return of the UpdateAgent call, without having to wait for the callback, because we are absolutely sure that the viewer knows it is in the new region.
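The two-way wait, sketched with plain wait handles (illustrative; the real code works through the scene presence and agent-update paths):

    // Illustrative destination-side handshake. The viewer threadlet
    // (CompleteMovement) waits for the origin's UpdateAgent; the origin
    // threadlet (UpdateAgent) waits for a *root* presence, i.e. for
    // CompleteMovement to have run MakeRoot.
    using System.Threading;

    public class DestinationHandshakeSketch
    {
        private readonly ManualResetEvent m_updateFromOrigin = new ManualResetEvent(false);
        private readonly ManualResetEvent m_presenceIsRoot = new ManualResetEvent(false);

        // Threadlet 1: triggered by the viewer's CompleteMovementIntoRegion.
        public void CompleteMovement()
        {
            m_updateFromOrigin.WaitOne(10000);  // wait for the origin's UpdateAgent
            MakeRoot();
            m_presenceIsRoot.Set();
        }

        // Threadlet 2: triggered by the origin region's UpdateAgent call.
        public bool UpdateAgent()
        {
            m_updateFromOrigin.Set();
            // Only return once the presence is root, so the origin can safely
            // close its agent on return without waiting for a callback.
            return m_presenceIsRoot.WaitOne(10000);
        }

        private void MakeRoot() { /* placeholder */ }
    }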
Note also that in the V1 protocol, the destination was getting UseCircuitCode from the viewer twice -- once on EstablishAgentCommunication and then again on TeleportFinish. The second UCC was being ignored, but it shows how we were not following the expected steps...