You've got that right for the most part. I'll try to clarify the confusing bits.
1. The records themselves are created automatically. As soon as a single instance of SI Client reports a new application, that application gets a dedicated record on our website (see the first sketch after this list). This process is covered at http://software.informer.com/partners.php
2. The statistical data (number of users, number of downloads, etc.) is always updated automatically, with an occasional human check for possible anomalies (see the second sketch after this list).
3. The factual data (category, version number, etc.) is initially gathered from a variety of sources. The SI Client reports the version number, the app icon, the developer name, and some other info (you can see exactly what it submits by generating an XML report from within the app itself, e.g. from the Welcome screen upon the first launch; see the third sketch after this list). As for the category, home URL, download URL, and other such info, these are hard to come by automatically, especially when there's no PAD file to parse, so we generally rely on user reports until an editor comes along and reviews the application.
4. For applications with a lower number of users, the version info is updated automatically based on specific threshold values (see the fourth sketch after this list). For applications with a higher number of users, the version info is updated manually.
5. It is actually not uncommon for an application to be reviewed several times. New reviews are written pretty regularly, in fact, but those undergo several stages of human checks; so whenever the new reviewer changes the original category, which is meant to stay constant, they have to provide solid evidence for their reasoning (e.g. show that most similar applications are filed under that new category).
6. If the information listed on SI is incorrect, outdated, or missing, the proper way to bring that up would indeed be the 'Suggest a correction' box. These reports are checked by the admins and are often used for creating new automated rules that prevent the same issues from popping up again (see the last sketch below).
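To make the points above a bit more concrete, here are a few purely illustrative Python sketches. None of this is SI's actual code; the names, fields and numbers are made up for the sake of the example. The first one shows the idea behind point 1: the very first client report of an unknown application creates its record.

# Hypothetical sketch (point 1): the first client report of an unknown
# application creates a dedicated record; later reports only update it.
records = {}  # keyed by (app name, developer); stands in for the real database

def handle_client_report(report):
    key = (report["name"], report["developer"])
    if key not in records:
        # First sighting of this application: create its record right away.
        records[key] = {"name": report["name"],
                        "developer": report["developer"],
                        "version": report["version"],
                        "users": 0}
    records[key]["users"] += 1  # every reporting installation counts towards the stats
    return records[key]

handle_client_report({"name": "ExampleApp", "developer": "Example Ltd", "version": "1.0"})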
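The second sketch illustrates point 2: statistics are applied automatically unless a value jumps sharply enough to warrant a human look. The thresholds here are placeholders, not the ones we actually use.

# Hypothetical sketch (point 2): flag a statistic for human review when it
# changes too sharply compared to the previous value.
def needs_human_check(yesterday, today, max_ratio=3.0, min_delta=1000):
    delta = abs(today - yesterday)
    ratio = today / yesterday if yesterday else float("inf")
    return delta >= min_delta and (ratio >= max_ratio or ratio <= 1 / max_ratio)

print(needs_human_check(500, 600))    # False: ordinary growth, updated automatically
print(needs_human_check(500, 5000))   # True: suspicious spike, goes to a human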
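The third sketch relates to point 3. The element names below are invented for illustration; generate the XML report from within the client to see the real structure.

# Hypothetical sketch (point 3): reading basic facts out of a client report.
# The XML layout shown here is made up and does not match SI's real schema.
import xml.etree.ElementTree as ET

sample = """
<report>
  <application>
    <name>ExampleApp</name>
    <version>2.1.5</version>
    <developer>Example Ltd</developer>
  </application>
</report>
"""

app = ET.fromstring(sample).find("application")
print(app.findtext("name"), app.findtext("version"), app.findtext("developer"))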
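The fourth sketch corresponds to point 4; again, the numbers are placeholders rather than our real threshold values.

# Hypothetical sketch (point 4): a reported version replaces the listed one
# automatically only for smaller applications, and only once enough
# installations have confirmed it; popular applications go through an editor.
def should_auto_update(total_users, reporting_users,
                       min_reports=25, max_audience=10000):
    if total_users > max_audience:
        return False  # high-profile record: the version change is handled manually
    return reporting_users >= min_reports

print(should_auto_update(total_users=800, reporting_users=40))    # True
print(should_auto_update(total_users=50000, reporting_users=40))  # False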
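And the last sketch illustrates point 6: an accepted correction can be turned into a rule that keeps being applied, so the same mistake does not come back. Purely illustrative, of course; the real rules live in our admin tools.

# Hypothetical sketch (point 6): an accepted correction becomes a rule that
# keeps being applied to the record whenever it is rebuilt or updated.
rules = []  # each rule: (app name, field, correct value)

def accept_correction(app_name, field, correct_value):
    rules.append((app_name, field, correct_value))

def apply_rules(record):
    for app_name, field, value in rules:
        if record["name"] == app_name:
            record[field] = value  # overwrite the field with the accepted value
    return record

accept_correction("ExampleApp", "category", "Text Editors")
print(apply_rules({"name": "ExampleApp", "category": "Games"}))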
Thanks Tom!
This is probably all fairly basic stuff, but as I am new here, I am trying to better understand some of the processes and procedures (human and automated) behind Software Informer. I guess there are many components which somehow interconnect.
The case I have in mind at this stage ...
Assume there is a fairly new software product and one of the members/editors picks it up from somewhere, tests it, writes a review and places the findings online. If the software becomes more widely used, SI then picks up additional information from the various users' machines (PCs) and automatically (?) updates e.g. the version information, which is then displayed online together with figures on how many users and downloads there are (sorry if that is too simplified :-) and please correct me if I am wrong). Is there any human screening at that stage? If everything goes alright, the information displayed online should be fairly up to date as regards the version and other fields. (Btw, which other fields are picked up by SI? Author, ...?) Is this correct?
However, what happens if
- the information collected by SI does not match the record, or
- the product is not yet being widely used and hence little or no new information is coming in, or
- the information online is incorrect (e.g. wrong software category, etc.), or
- the software is accidentally described again by another member/editor and placed under a different category?
I assume that if somebody becomes aware of this, he/she then has to use the "Suggest a correction" box to try to rectify it. Does this go to the user/editor who created the record, or is it pooled centrally to be checked by some of the admins? Or what other procedures do you have in place to detect and correct such issues?
Cheers
Thomas