This alarm can then invoke this to log extra information.
This is used in LLUDPServer to show which client was being processed when incoming and outgoing udp watchdog alarms are triggered.
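A minimal sketch of the pattern, assuming a hypothetical WatchedThreadInfo type and AlarmExtraInfo delegate (not the actual Watchdog API): the code that starts a watched thread supplies an optional callback, and the watchdog invokes it when the alarm fires so the log line can carry extra context.

    using System;

    class WatchedThreadInfo
    {
        public string Name;
        public Func<string> AlarmExtraInfo;   // optional extra diagnostics, supplied by the thread's owner
    }

    static class WatchdogSketch
    {
        public static void FireAlarm(WatchedThreadInfo info)
        {
            string extra = info.AlarmExtraInfo != null ? info.AlarmExtraInfo() : String.Empty;
            Console.WriteLine("[WATCHDOG]: Timeout on thread {0}. {1}", info.Name, extra);
        }
    }

In LLUDPServer the callback would return text along the lines of which client was being processed, which is what makes the alarm output useful for diagnosis.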
The packet was actually being handled but not acted on.
This change extends the default timeout for paused clients to 5 minutes
and makes both the paused and non-paused timeout periods configurable.
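A sketch of the timeout rule under illustrative names and defaults (the real option names and values come from the client stack configuration and are assumptions here): a paused client gets a much longer window before it is treated as timed out.

    using System;

    class ClientTimeoutPolicy
    {
        // Both values are meant to be read from configuration; the defaults here are illustrative.
        public TimeSpan AckTimeout = TimeSpan.FromSeconds(60);
        public TimeSpan PausedAckTimeout = TimeSpan.FromMinutes(5);

        public bool HasTimedOut(bool clientIsPaused, TimeSpan timeSinceLastAck)
        {
            TimeSpan limit = clientIsPaused ? PausedAckTimeout : AckTimeout;
            return timeSinceLastAck > limit;
        }
    }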
1) The return messages were being wrongly populated with the names of asset, inventory and sale types, when their corresponding integer codes should have been used instead.
2) Folders containing links were including the linked items in the descendents figure, when only the links themselves should have been counted.
3) Links and the items they link to in link folders were either missing from the return data or not returned in the correct order.
Now that these issues have been addressed, outfits and attachments appear to work consistently when HTTP inventory is enabled (as is now the default).
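A compressed sketch of the first two rules, using hypothetical types rather than the real inventory classes: type fields are sent as their integer codes, and a folder's descendents figure counts link items themselves rather than the items they point to.

    using System.Collections.Generic;

    enum AssetTypeSketch : sbyte { Object = 6, Link = 24 }   // illustrative subset of type codes

    class ItemSketch
    {
        public string Name;
        public AssetTypeSketch Type;
    }

    static class DescendentsReplySketch
    {
        public static Dictionary<string, object> Serialize(ItemSketch item)
        {
            return new Dictionary<string, object>
            {
                { "name", item.Name },
                { "asset_type", (int)item.Type }   // the integer code, not the type's name
            };
        }

        public static int CountDescendents(List<ItemSketch> folderContents)
        {
            // Each link counts once; the items that links point to are not added to this figure.
            return folderContents.Count;
        }
    }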
These will act as a sanity check against the main scene stats, to show whether physics scene entities are being managed properly.
The physics total prim count will not match the scene's total prim count, since the physics total does not include phantom prims.
Also zeroes collision scores on all prims after report collection, not just the top 25.
As before, collision scores are only reset after a report is requested, which may give unrealistically high numbers on the first request.
So to see more realistic scores, ignore the first report and then refresh the request after a couple of seconds or so.
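A sketch of the report/reset cycle under assumed names (the real prim type and report plumbing differ): grab the top 25 scorers, then zero the score on every prim so the next report only reflects collisions since this one.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class PrimSketch
    {
        public string Name;
        public float CollisionScore;
    }

    static class CollisionReportSketch
    {
        public static List<Tuple<string, float>> Collect(List<PrimSketch> prims)
        {
            // Snapshot the top 25 before resetting, so the report keeps their values.
            List<Tuple<string, float>> top = prims
                .OrderByDescending(p => p.CollisionScore)
                .Take(25)
                .Select(p => Tuple.Create(p.Name, p.CollisionScore))
                .ToList();

            // Zero every prim, not just the reported ones, so scores only ever
            // cover the interval between two report requests.
            foreach (PrimSketch p in prims)
                p.CollisionScore = 0;

            return top;
        }
    }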
If active, the physics module can return arbitrary stat counters that can be seen via the MonitoringModule
(http://opensimulator.org/wiki/Monitoring_Module)
This is only active in OdeScene if collect_stats = true in [ODEPhysicsSettings].
This patch allows OdeScene to collect elapsed time information for calls to the ODE native collision methods, to assess what proportion of total physics processing time they take.
This data is returned as ODENativeCollisionFrameMS in the monitoring module, updated every 3 seconds.
The performance effect of collecting stats is probably extremely minor, dwarfed by the rest of the physics code.
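A sketch of the measurement, assuming made-up member names (the real code sits inside OdeScene's collision step): wrap the native collision call in a Stopwatch, accumulate the ticks, and hand the total out as milliseconds when the stats are harvested.

    using System.Diagnostics;

    class NativeCollisionTimer
    {
        readonly Stopwatch m_watch = new Stopwatch();
        long m_nativeCollisionTicks;

        // Call this around the native collision step each physics frame.
        public void TimeCollisionStep(System.Action nativeCollide)
        {
            m_watch.Restart();
            nativeCollide();          // the actual ODE native collision call would go here
            m_watch.Stop();
            m_nativeCollisionTicks += m_watch.ElapsedTicks;
        }

        // Harvested roughly every 3 seconds by the stats collector; returns the
        // milliseconds spent in native collision since the last harvest.
        public double PopFrameMS()
        {
            double ms = m_nativeCollisionTicks * 1000.0 / Stopwatch.Frequency;
            m_nativeCollisionTicks = 0;
            return ms;
        }
    }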
Also fixes the log warning from ResetInTransit() if the state is cleared directly from Transferring or ReceivedAtDestination, as pointed out in mantis 5426.
If this is done any earlier, then with ODE, agent update calls that are still arriving can fail because they try to use a raycast manager that has already been disposed.
The Bullet plugin does nothing on Dispose().
However, I wouldn't be at all surprised if individual region restarting was buggy in lots of other areas.
If this is allowed, then the client usually gets forcibly logged out and data structures might be put into bad states.
To prevent this, the binary state machine of EMT.m_agentsInTransit is replaced with a four-state machine (Preparing, Transferring, ReceivedAtDestination, CleaningUp).
This is necessary because the source region needs to know when the destination region has received the user but a teleport back cannot happen until the source region has cleaned up.
Tested on standalone, grid and with v1 and v3 clients.
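A rough sketch of the state tracking, using assumed names (the real implementation guards EntityTransferModule's m_agentsInTransit dictionary): an agent can only be put into transit if it is not already there, and a teleport back is refused until the entry has been removed at the end of CleaningUp.

    using System;
    using System.Collections.Generic;

    enum AgentTransferState { Preparing, Transferring, ReceivedAtDestination, CleaningUp }

    class TransitStateSketch
    {
        readonly object m_lock = new object();
        readonly Dictionary<Guid, AgentTransferState> m_agentsInTransit =
            new Dictionary<Guid, AgentTransferState>();

        // Returns false if the agent is already in transit, which is how a
        // too-early teleport back gets refused.
        public bool TrySetInTransit(Guid agentId)
        {
            lock (m_lock)
            {
                if (m_agentsInTransit.ContainsKey(agentId))
                    return false;
                m_agentsInTransit[agentId] = AgentTransferState.Preparing;
                return true;
            }
        }

        public void UpdateInTransit(Guid agentId, AgentTransferState state)
        {
            lock (m_lock)
                m_agentsInTransit[agentId] = state;
        }

        // Only called once the source region has finished cleaning up.
        public void ResetFromTransit(Guid agentId)
        {
            lock (m_lock)
                m_agentsInTransit.Remove(agentId);
        }
    }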
This was due to two things:
1) SimulationServiceConnector.QueryAccess was always looking at the outer result["success"] value.
But if a "_Result" map is returned (which is currently always the case), then the true success value is _Result["success"]; the outer result["success"] is always true no matter what.
2) If QueryAccess was false at the destination, then AgentHandlers.DoQueryAccess() was never putting this in the result.
The default action of SerializeJsonString() is not to put false booleans in the JSON at all, so this has to be set explicitly.
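A sketch of both sides of the fix with plain dictionaries standing in for the real map type (an assumption; the actual code works on the serialized service reply): the client unwraps the inner "_Result" map before reading "success", and the server writes "success" unconditionally so a refusal survives a serializer that omits false booleans.

    using System.Collections.Generic;

    static class QueryAccessSketch
    {
        // Connector side: prefer the inner _Result map if one is present.
        public static bool ReadSuccess(Dictionary<string, object> reply)
        {
            object inner;
            if (reply.TryGetValue("_Result", out inner) && inner is Dictionary<string, object>)
                reply = (Dictionary<string, object>)inner;

            object success;
            return reply.TryGetValue("success", out success) && success is bool && (bool)success;
        }

        // Handler side: always set the flag, even when it is false.
        public static Dictionary<string, object> BuildReply(bool allowed)
        {
            return new Dictionary<string, object> { { "success", allowed } };
        }
    }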
This is to help relieve a race condition when an agent teleports then immediately attempts to teleport back before the source region has properly cleaned up/demoted the old ScenePresence.
This is rare in viewers but much more likely via scripting or a region module.
However, more needs to be done, since virtually all cleanup happens after the transit flag is cleared.
We possibly need to add a 'cleaning up' state to the in-transit handling.
This change required making the EntityTransferModule and HGEntityTransferModule per-region rather than shared, in order to allow separate transit lists.
Changes were also required in LocalSimulationConnector.
Tested in standalone, grid and with local and remote region crossings with attachments.
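A skeletal sketch of the per-region shape, with assumed types (the real module implements the region module interfaces and holds much more state): each region gets its own module instance, and therefore its own transit list, instead of every region sharing one.

    using System;
    using System.Collections.Generic;

    interface ISceneSketch { string Name { get; } }

    class PerRegionTransferSketch
    {
        // One instance of this class is created per region, so this set is
        // private to that region's teleports and crossings.
        readonly HashSet<Guid> m_agentsInTransit = new HashSet<Guid>();
        ISceneSketch m_scene;

        public void AddRegion(ISceneSketch scene)
        {
            m_scene = scene;   // exactly one scene per module instance
        }

        public bool IsInTransit(Guid agentId)
        {
            return m_agentsInTransit.Contains(agentId);
        }
    }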
Since this is done directly from ScenePresence, it can lead to a race condition with the simulator loop.
There's no real point doing it anyway, since the clear will be done very shortly afterwards by the simulate loop, and either there are no events (for a new avatar) or the events don't matter (for a departing avatar).
This matches existing behaviour in OdePrim
According to SL docs such as http://community.secondlife.com/t5/English-Knowledge-Base/Managing-Private-Regions/ta-p/700115, agent limits should apply to avatars only.
This makes sense from a user perspective, and also from a code perspective since child agents with no physics or actions take up a fraction of root agent resources.
As such, the check is now only performed in Scene.QueryAccess() - cross and teleport check this before allowing an agent to translocate.
This also removes an off-by-one error that could occur in certain circumstances on teleport, where a new child agent was double-counted if a pre-teleport agent update was performed.
This does not affect an existing bug where limits or other QueryAccess() checks are not applied to avatars logging directly into a region.
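A sketch of the check in the spirit of this change (hypothetical presence type; the real logic lives in Scene.QueryAccess()): only root agents are counted against the limit, so child agents held for neighbour views never consume a slot.

    using System.Collections.Generic;
    using System.Linq;

    class PresenceSketch
    {
        public bool IsChildAgent;
    }

    static class AgentLimitSketch
    {
        public static bool QueryAccess(IEnumerable<PresenceSketch> presences, int agentLimit)
        {
            int rootAgents = presences.Count(p => !p.IsChildAgent);
            return rootAgents < agentLimit;   // child agents are ignored entirely
        }
    }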