Intended for use if there are future issues with mono crashes whilst generating dynamic textures.
This test is triggered via a new test-stress nant target.
Not run by default.
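As an illustration only, the kind of concurrent GDI+ work such a stress test exercises might look like the sketch below; the class and method names are made up and this is not the actual test code:

```csharp
using System.Drawing;
using System.Threading;

// Illustrative only: many threads creating and disposing GDI+ objects
// concurrently, similar in spirit to heavy dynamic texture generation.
public static class DynamicTextureStressSketch
{
    public static void Run(int threadCount, int iterations)
    {
        Thread[] workers = new Thread[threadCount];

        for (int i = 0; i < threadCount; i++)
        {
            workers[i] = new Thread(() =>
            {
                for (int j = 0; j < iterations; j++)
                {
                    using (Bitmap bitmap = new Bitmap(256, 256))
                    using (Graphics g = Graphics.FromImage(bitmap))
                    {
                        g.Clear(Color.White);
                        g.DrawRectangle(Pens.Black, 10, 10, 100, 100);
                    }
                }
            });

            workers[i].Start();
        }

        foreach (Thread worker in workers)
            worker.Join();
    }
}
```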
Disabled (status quo) by default.
This flag makes the dynamic texture module cache and reuse previously generated dynamic textures for 24 hours, given the same input commands and extra params.
This occurs as long as those commands would always generate the same texture (e.g. they do not contain commands to fetch data from the web).
This makes texture changing faster, since a viewer-cached texture uuid is sent, and may reduce simulator load in regions that generate lots of dynamic textures.
A downside is that this stops expiry of old temporary dynamic textures from the cache.
Another downside is that a jpeg2000 generation that partially failed is currently not regenerated until restart or after 24 hours.
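A minimal sketch of the caching idea - keyed on the drawing commands plus extra params, with a 24 hour expiry - is shown below; the class and its members are hypothetical and are not the module's actual implementation:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: reuse a previously generated texture id when the
// same commands and extra params are seen again within 24 hours.
public class DynamicTextureCacheSketch
{
    private readonly Dictionary<string, KeyValuePair<Guid, DateTime>> m_cache
        = new Dictionary<string, KeyValuePair<Guid, DateTime>>();

    private static readonly TimeSpan Expiry = TimeSpan.FromHours(24);

    public bool TryGetTexture(string commands, string extraParams, out Guid textureId)
    {
        string key = commands + "|" + extraParams;
        KeyValuePair<Guid, DateTime> entry;

        lock (m_cache)
        {
            if (m_cache.TryGetValue(key, out entry)
                && DateTime.UtcNow - entry.Value < Expiry)
            {
                textureId = entry.Key;
                return true;
            }
        }

        textureId = Guid.Empty;
        return false;
    }

    public void StoreTexture(string commands, string extraParams, Guid textureId)
    {
        string key = commands + "|" + extraParams;

        lock (m_cache)
            m_cache[key] = new KeyValuePair<Guid, DateTime>(textureId, DateTime.UtcNow);
    }
}
```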
In testing, it appears that if multiple threads dispose of separate GDI+ objects simultaneously,
the native malloc heap can become corrupted, possibly due to a double free(). This may be due to
bugs in the underlying libcairo used by mono's libgdiplus.dll on Linux/OSX. These problems were
seen with both libcairo 1.10.2-6.1ubuntu3 and 1.8.10-2ubuntu1. They go away if disposal is performed
under lock.
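A minimal sketch of disposal under a shared lock, assuming a hypothetical static lock object rather than whatever the module actually uses:

```csharp
using System.Drawing;

// Hypothetical sketch: serialise disposal of GDI+ objects so that
// mono's libgdiplus/libcairo never frees them from two threads at once.
public static class GdiDisposalSketch
{
    private static readonly object s_disposeLock = new object();

    public static void DrawAndDispose()
    {
        Bitmap bitmap = new Bitmap(256, 256);
        Graphics graphics = Graphics.FromImage(bitmap);

        try
        {
            graphics.Clear(Color.White);
        }
        finally
        {
            // Dispose under lock rather than letting each thread
            // dispose concurrently.
            lock (s_disposeLock)
            {
                graphics.Dispose();
                bitmap.Dispose();
            }
        }
    }
}
```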
This is to resolve a reported issue in http://opensimulator.org/mantis/view.php?id=6232
Here, the land management module is using OpenSim.Framework.Cache (the only code to currently do so apart from the non-default CoreAssetCache).
This is to allow a second attempt to remove an avatar even if "show connections" shows them as already inactive (i.e. close has already been attempted once).
You should only attempt --force if a normal kick fails.
This is partly for diagnostics as we have seen some connections occasionally remain on lbsa plaza even if they are registered as inactive.
This is not a permanent solution and may not work anyway - the ultimate solution is to stop this problem from happening in the first place.
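For example, on the region console (the avatar name is a placeholder and the exact argument order may differ):

```
show connections
kick user Test User --force
```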
The convention is that if an object implements IDisposable, the code must explicitly call Dispose() or call it via the using statement.
This may be particularly important for GDI+ objects since they encapsulate native code entities.
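For example, with a GDI+ bitmap and graphics context:

```csharp
using System.Drawing;

// The using statement guarantees Dispose() is called even if an
// exception is thrown, releasing the underlying native GDI+ resources.
public static class UsingExampleSketch
{
    public static void DrawSomething()
    {
        using (Bitmap bitmap = new Bitmap(64, 64))
        using (Graphics graphics = Graphics.FromImage(bitmap))
        {
            graphics.Clear(Color.Red);
        } // graphics.Dispose() and bitmap.Dispose() run here automatically
    }
}
```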
A deadlock was observed today where NPC removal on a script thread would lock the NPC list and then try to lock the sensor list via scripted attachment removal.
Concurrently, the sensor sweep thread would lock the sensor list and then try to lock the NPC list to check NPC status.
This commit resolves the deadlock by replacing the sensor list on update rather than locking it for the duration of the sweep.
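A minimal, self-contained sketch of the replace-on-update idea (the names here are hypothetical; the real sensor repeat plugin differs):

```csharp
using System.Collections.Generic;

// Hypothetical sketch: writers replace the list reference under a short lock,
// while the sweep thread iterates a stable snapshot without holding that lock,
// so it never takes another lock (e.g. the NPC list) while holding the
// sensor list lock.
public class SensorListSketch
{
    private readonly object m_updateLock = new object();
    private List<string> m_sensors = new List<string>();

    public void AddSensor(string sensor)
    {
        lock (m_updateLock)
        {
            // Copy, modify, then swap in the new list reference.
            List<string> updated = new List<string>(m_sensors);
            updated.Add(sensor);
            m_sensors = updated;
        }
    }

    public void Sweep()
    {
        // Take a snapshot reference; no lock is held during the sweep itself.
        List<string> snapshot = m_sensors;

        foreach (string sensor in snapshot)
        {
            // ... check each sensor; this may safely call into code that takes
            // other locks (such as the NPC list) without risking deadlock.
        }
    }
}
```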
Possibly this was for a feature that was never implemented or was otherwise removed.
Thanks to SignpostMarv for spotting the warning that showed this parameter was never changed.
If we copy the asset description then we will only ever replicate the very first description, if there was one, not any subsequent changes.
Thanks to Oren Hurvitz of Kitely for this patch from http://opensimulator.org/mantis/view.php?id=6107
I have adapted it slightly to change the order of arguments (name before description rather than vice-versa) and slightly improve some method doc.
This aims to capture the amount of memory that OpenSim turns over whilst operating a region.
This memory is not lost - apart from leaks it is reclaimed by the garbage collector.
However, the more memory that gets turned over the more work the GC has to do to reclaim it.
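One illustrative way to approximate such a churn figure is to sample the managed heap periodically and sum the growth between samples; this is only a sketch of the idea, not necessarily how the statistic is actually computed:

```csharp
using System;

// Hypothetical sketch: estimate heap churn by sampling the managed heap size.
// Growth between samples approximates memory allocated (and later reclaimed
// by the GC); drops caused by collections are ignored.
public class MemoryChurnSketch
{
    private long m_lastSample = GC.GetTotalMemory(false);
    private long m_totalChurn;

    // Call this periodically, e.g. from a timer.
    public void Sample()
    {
        long current = GC.GetTotalMemory(false);
        long delta = current - m_lastSample;

        if (delta > 0)
            m_totalChurn += delta;

        m_lastSample = current;
    }

    public long TotalChurnBytes { get { return m_totalChurn; } }
}
```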
This is fired when all regions are ready or when at least one region becomes not ready.
Recently added EventManager.OnRegionReady becomes OnRegionReadyStatusChange to match OnLoginsEnabledStatusChange
When linking, detach the no longer used SOGs from backup so they can be
collected. Since their Children collection is never emptied, they prevent
their former SOPs from being collected as well.
This replaces EventManager.OnLoginsEnabled which only fired when logins were first enabled
and was affected by a bug where it would never fire if the region started with logins disabled.
This saves listeners from having to re-retrieve the scene from their own lists, which won't work anyway if multiple regions with the same name have been allowed
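A self-contained sketch of the status-change pattern (not OpenSim's actual EventManager; the enabled flag argument is an assumption) showing why passing the scene to listeners is convenient:

```csharp
using System;

// Self-contained sketch: a logins status-change event that hands the scene
// to its listeners, so they do not have to look the scene up again from
// their own lists.
public class SceneSketch
{
    public string Name;
    public bool LoginsEnabled { get; private set; }

    public event Action<SceneSketch, bool> OnLoginsEnabledStatusChange;

    public void SetLoginsEnabled(bool enabled)
    {
        LoginsEnabled = enabled;

        Action<SceneSketch, bool> handler = OnLoginsEnabledStatusChange;

        // Fires on every change, whether logins are being enabled or disabled.
        if (handler != null)
            handler(this, enabled);
    }
}

public static class LoginsListenerSketch
{
    public static void Subscribe(SceneSketch scene)
    {
        scene.OnLoginsEnabledStatusChange += (s, enabled) =>
            Console.WriteLine("{0}: logins enabled = {1}", s.Name, enabled);
    }
}
```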
Giving a large folder from one avatar to another was causing a long delay when handled synchronously, since it took some time to retrieve the necessary data from the inventory service.
Handling this asynchronously instead stops this delay from disrupting all avatars in the scene, as had been seen on OSGrid.
I see no reason for not handling all IM messages asynchronously, just as incoming chat is handled asynchronously, so this has been switched for all instant messages.
Thanks to Nebadon for testing this change out.
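A minimal sketch of the general idea - pushing the handling onto a worker thread so slow operations do not block the caller; the types and method names here are stand-ins, not the actual IM handling code:

```csharp
using System.Threading;

// Hypothetical sketch: queue incoming instant message handling onto the
// thread pool so a slow inventory-service call for one avatar does not
// hold up processing for everyone else in the scene.
public static class AsyncInstantMessageSketch
{
    public static void OnIncomingInstantMessage(object message)
    {
        ThreadPool.QueueUserWorkItem(HandleInstantMessage, message);
    }

    private static void HandleInstantMessage(object message)
    {
        // Potentially slow work, e.g. fetching a large folder's contents
        // from the inventory service before delivering the give-inventory IM.
    }
}
```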