The problem: calls need to return enough information to distinguish successful execution from the various kinds and severities of failure.
The solution: use a bounded range of negative values as "failure" codes, leaving the non-negative return values available as "success" codes. The canonical success code is 0, but a routine that needs to return a value (string length, ENT pointer) can do so and have that value interpreted as a "success" code as well.
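A minimal sketch of the convention, as a Python analogue (the routine `lookup_length` and the table argument are hypothetical, not from the source):

```python
NOTPRES = -1  # "no data present" failure code from the table below

def lookup_length(table, key):
    """Hypothetical routine: look up KEY and return the length of its entry.
    A non-negative result is a "success" code that carries the length;
    a negative result is a failure code."""
    if key not in table:
        return NOTPRES        # failure: no data present
    return len(table[key])    # success: the length doubles as the code

# Callers test the sign to separate success from failure:
code = lookup_length({"a": "xyz"}, "a")
if code >= 0:
    length = code             # success: the value itself (3 here)
```

The point of the convention is that a single integer channel carries both the result and the status, so no out-of-band error flag is needed.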
There are "degrees" of failure, and the negative codes are partitioned by increasing severity, as follows:
|code||name||meaning|
|-1||notpres||successful execution, no data present or no change made|
|-2||terminated||successful execution, processing stopped early ("stop processing")|
|-10||retryerr||failure, no damage, caller can retry operation|
|-13||keyerr||failure, no damage, call was in error|
|-15||argerr||failure, no damage, call was in error|
|-20||noroom||failure, no damage, out of room in file|
|-30||typerr||failure, file or object was not of correct type|
|-40||ioerr||i/o error, DB may be damaged|
|-45||strangerr||internal error, DB may be damaged|
The first class (notpres, terminated) represents operations that completed without error. The second class (retryerr) represents operations that failed to complete, but are guaranteed to leave the DB in a correct state and are retryable (or easily correctable). The third class (keyerr through typerr) represents operations that failed to complete and did not damage the database, but are not easily fixed or restarted. The last class (ioerr, strangerr) represents error conditions in which the DB was corrupted, or during which DB corruption was detected.
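The four classes can be modeled as ranges over the code values. This is a sketch in Python; the class boundaries and class names are inferred from the table and prose above, not stated in the source:

```python
# Failure codes from the table above.
NOTPRES, TERMINATED = -1, -2
RETRYERR, KEYERR, ARGERR, NOROOM, TYPERR = -10, -13, -15, -20, -30
IOERR, STRANGERR = -40, -45

def severity_class(code):
    """Map a return code to its severity class (boundaries assumed
    from the table's ordering by increasing severity)."""
    if code >= 0:
        return "success"        # non-negative: success code
    if code >= TERMINATED:
        return "completed"      # notpres, terminated: no error
    if code >= RETRYERR:
        return "retryable"      # retryerr: safe to retry
    if code >= TYPERR:
        return "not-retryable"  # keyerr .. typerr: no damage, not fixable
    return "db-damage"          # ioerr, strangerr: DB may be damaged
```

Because the codes are ordered by severity, range comparisons suffice; no per-code lookup table is needed.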
The predicate (ERR? code) returns #t if the return code falls within the failure range NOTPRES-MAXERR; the predicate (REALERR? code) returns #t if CODE is an actual error, as opposed to a "not there" (notpres) or "stop processing" (terminated) message.
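The two predicates can be sketched as Python analogues. The value of MAXERR is an assumption here (taken to be strangerr, the most severe code in the table); the source does not define it:

```python
NOTPRES, TERMINATED = -1, -2
MAXERR = -45  # assumed: the most negative code in the table (strangerr)

def is_err(code):
    """Analogue of (ERR? code): true for any code in the
    failure range NOTPRES..MAXERR."""
    return MAXERR <= code <= NOTPRES

def is_real_err(code):
    """Analogue of (REALERR? code): true only for actual errors,
    excluding the "not there" (notpres) and "stop processing"
    (terminated) messages."""
    return is_err(code) and code < TERMINATED
```

Under this reading, success codes (>= 0) satisfy neither predicate, notpres and terminated satisfy only ERR?, and everything from retryerr down satisfies both.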