Archived messages from: gitter.im/red/tests from year: 2017

greggirwin
17:57I'm adding a suite for percent tests, derived from %float-test.red. Since there are a lot of magic numbers in there, should we expect results to be the same (accounting for scale), or are there different limits and special cases we need to put in place?
17:59As an example, 1.#INF comparisons fail, but match in the console. e.g.
>> strict-equal? 1.#INF 1.7976931348623157e+308% - -1.7976931348623157e+308%
== false
>> equal? 1.#INF 1.7976931348623157e+308% - -1.7976931348623157e+308%
== false
>> equal? 1.#INF 1.7976931348623157e+310% - -1.7976931348623157e+310%
== true
>> strict-equal? 1.#INF 1.7976931348623157e+310% - -1.7976931348623157e+310%
== false
>> 1.7976931348623157e+310% - -1.7976931348623157e+310%
== 1.#INF
>> 1.7976931348623157e+308% - -1.7976931348623157e+308%
== 1.#INF

E+308 was in float tests.

Or should we not tie tests to percent being a float in its current incarnation?
18:08If I know what to do with 1.7976931348623157e+308 and 2.2250738585072014e-308, and if that solves the INF issues, the number of failed tests should drop dramatically.
18:135.562684646268003e-309 is another one.
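(Those magic numbers are the IEEE-754 double-precision limits, which explains the overflows above; a quick sketch in plain Red, no percent! involved:)

```red
Red []

;-- 1.7976931348623157e+308 is the largest finite double (DBL_MAX);
;-- 2.2250738585072014e-308 is the smallest positive normal double (DBL_MIN);
;-- 5.562684646268003e-309 lies in the subnormal range below DBL_MIN.
;-- DBL_MAX - (-DBL_MAX) doubles the magnitude, overflowing to 1.#INF:
probe 1.7976931348623157e+308 - -1.7976931348623157e+308
```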

PeterWAWood
00:38@greggirwin I think it would be better not to duplicate the float tests as percent tests. The float tests will test the underlying implementation. I think it is only necessary to test the scaling by 100 and a single test of each arithmetic operator. I believe it is safe to assume that arithmetic operators will perform correctly in all cases if one test succeeds.
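(A minimal sketch of what that reduced suite could look like, in the quick-test style used elsewhere in these tests; the test names are made up, and it assumes percent! arithmetic behaves as scaled floats, per the discussion above:)

```red
--test-- "percent-scale-1"
    --assert 0.5 = to float! 50%            ;-- the divide-by-100 scaling
    --assert 50% = to percent! 0.5
--test-- "percent-arith-1"
    --assert 150% = (100% + 50%)            ;-- one case per operator
    --assert 50%  = (100% - 50%)
    --assert 50%  = (100% * 50%)            ;-- 1.0 * 0.5
```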
00:40@dockimbel @greggirwin Personally I feel that scientific notation should not be allowed for percent! values. It seems unnatural; I've never seen a percent written using scientific notation.
dockimbel
08:02@PeterWAWood Agreed, but what's the alternative?
PeterWAWood
08:38In everyday use percentages have a very limited value range and precision. I believe they are used as a presentation aid rather than to express numbers accurately. Most typically, the largest percents used are in the hundreds. Two decimal places is common, three are used on occasion, and percents with four decimal places are a real rarity.
08:42I know that will mean a little extra processing but you could consider it the price that the programmer pays for not having to multiply by 100. 😀
greggirwin
16:15I agree that percents are usually a much more limited range, and also never use E notation. Money values have a much larger normal range, but should also never use E notation when formed (when that time comes). Percents can still support a fairly wide range, and artificial limitations lifted if a need is shown. I'm OK with the current implementation for some time though.
16:17A mentor of mine (Tom at iBEAM, Peter) would often say "Can we fix it with language?" when talking about a product feature. I think we can here, for now. We just say percents are valid from M to N (limits), and will be auto rounded to 4 decimal places (currently solves the E issue in forming). Values outside the supported range are undefined in their behavior (or we throw an error when trying to create them).
16:19The only downside I can think of is if you want to press them into service in a dialect, where you want full float/decimal range support, but differentiated. I'm OK with that constraint as well though. Constraints are powerful things, and very helpful.

dockimbel
07:02Two remarks on that topic:
* Adding constraints when not strictly necessary means more code branches, more tests, and more rules for users (us included) to remember.
* As @greggirwin pointed out, we often allow broader usage of many datatypes than their original intended purpose, in order to cover more ground for dialects (though, for the percent-specific case, it's probably harder to repurpose it in a dialect, as it's a divide-by-100 number).
07:03Excluding scientific notation for percents at the lexer level would be quite simple, but for serialization, I don't see how we can restrict it without restricting the percent range itself.
greggirwin
08:06I'm not saying we should put any constraints in the code, only in the user's mind.

greggirwin
14:57Adding tests for the new even?/odd? on time values (PR fixed), we have tb4-t: -1:-0:-0 in the tests, but that seems to be invalid syntax now (Invalid integer thrown). Should we remove that, or is it a regression?
18:56I also don't know if this is a regression:
>> tb3-t: 2147483645:59:59
== 2147483645:59:58.9997406
18:56A number of Rudolf's time tests are failing as well.
19:04Looks like I need to fix my even?/odd? PR for negative times too.
20:03Thought I'd just use round, but trying that real quick I'm missing something fundamental in R/S:
even?: func [
		tm		[red-time!]
		return: [logic!]
		/local
			t [float!]
	][
		tm: round tm 1.0 no yes no no no no		; yes is down? slot
		t: tm/time
		not as-logic (as integer! GET_SECONDS(t)) and 1
	]

produces:
*** Compilation Error: argument type mismatch on calling: red/time/round
*** expected: [struct! [
        header [integer!]
        padding [integer!]
        value [float!]
    ]], found: [float!]

meijeru
06:26The second argument to round should be a red-float! not a float!
06:27so you have to pack 1.0 into a struct!
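(That is, something like this; a sketch only, assuming the runtime's float/push helper, which allocates a red-float! on the stack from a plain float!:)

```red
even?: func [
    tm      [red-time!]
    return: [logic!]
    /local
        t [float!]
][
    ;-- pack 1.0 into a red-float! before passing it to round
    tm: round tm float/push 1.0 no yes no no no no    ;-- yes is down? slot
    t: tm/time
    not as-logic (as integer! GET_SECONDS(t)) and 1
]
```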
greggirwin
16:51Ahhh, just couldn't see it. Thanks Rudolf. I got it stuck in my head that it was my time struct.
16:51Now the question is whether that's the better way to do it.

geekyi
08:04Ah, I get it, semantics. When in Rome... in Red/System, use Red/System data structures? Unless it requires (un)boxing or other conversions.

mahengyang
06:56 @greggirwin I filed issue #3034 last week. It's strange: when I run power-test.red
$ red interp-power-test.red

output:
~~~started test~~~ interp-power
~~~finished test~~~  interp-power
  Number of Tests Performed:      10
  Number of Assertions Performed: 12
  Number of Assertions Passed:    12
  Number of Assertions Failed:    0


but when I run rebol -qws run-all.r --batch, there are two errors in quick-test.log:
===group=== power error
 --test-- power-error-2 FAILED**************
 --test-- power-error-2 FAILED**************

PeterWAWood
07:57@mahengyang See #3039 and take a look at the comments on #3034.
mahengyang
07:59OK, I see. So I need to wait for #3039 to be fixed, then make a new PR?

PeterWAWood
07:34@mahengyang #3039 is fixed :-)
07:35It would be great if you could make a new PR.
mahengyang
09:50@PeterWAWood I saw a card named Write tests for functions in environment/function.red on the Trello test board; there is no test code for environment/function.red yet. I plan to do this, is that ok?
PeterWAWood
11:09@mahengyang I think that would be good. There is a chance that environment/function.red could be moved or changed when Red 0.8.0 is released, but that is some way off.

The file should be red/tests/source/environment/function-test.red.

PeterWAWood
01:54@mahengyang If you can, would you mind rebasing #3034?

maximvl
11:27something strange here:
--test-- "series-find-76"  
		hs-fd-1: make hash! [2 3 5 test #"A" a/b 5 "tesT"]
		append hs-fd-1 datatype!
		--assert 3 = index? find hs-fd-1 5
11:27some cryptic append which doesn't affect anything

dockimbel
16:11@maximvl Indeed. You can use git blame to retrieve the author of this test.

mahengyang
08:47I’m writing tests for functions.red; how do I write a test for the quit func?
quit: func [
	"Stops evaluation and exits the program"
	/return status	[integer!] "Return an exit status"
][
	#if config/OS <> 'Windows [
		if system/console [system/console/terminate]
	]
	quit-return any [status 0]
]

and ?? func
??: func [
	"Prints a word and the value it refers to (molded)"
	'value [word! path!]
][
	prin mold :value
	prin ": "
	print either value? :value [mold get/any :value]["unset!"]
]
08:48seems I need to capture standard output
PeterWAWood
10:11@mahengyang To test quit you need to "stub out" quit-return. Something like this:
--test-- "quit-1"
    save-quit-return: :quit-return
    quit-return: func [/return status][any [status 0]]
    --assert 0 = quit
    quit-return: :save-quit-return


It doesn't test quit-return but it is better than no tests.
mahengyang
10:13ok, I tried call/output "red [] quit 2" out, but that seems no use
PeterWAWood
10:15You can use a similar technique to check standard out, though it is a little more complicated:
--test-- "??-1"
    save-print: :print
    save-prin: :prin
    ??output: copy ""
    print: function [val][append ??output val]
    prin: function [val][append ??output val]
    ??-1-a: 1
    ?? ??-1-a
    --assert none <> find ??output "??-1-a: 1"
    print: :save-print
    prin: :save-prin
dockimbel
10:18@PeterWAWood Typing on a phone? :-)
PeterWAWood
10:23No just using some not so intelligent HTML5 editor that converts : p r i n t into :p

It makes you wonder about the people who write such code. It's the same with VS Code. It continuously requests to be updated because the author didn't think that people would run it on a non-admin account. It's quite pathetic.

I don't know how people come to the conclusion that native apps are dead.
10:26@mahengyang It is possible to check sysout if you write the tests in Rebol. Once we are able to remove the Rebol dependency in the tests, I should be able to provide some better features to interrogate sysout.

mahengyang
06:05@PeterWAWood but my computer is a Mac, so this code
#if config/OS <> 'Windows [
        if system/console [system/console/terminate]
    ]

takes effect
PeterWAWood
08:35@mahengyang Don't worry about that code. Writing a test of quit will be difficult at this stage.

We can come up with some tests for call later.
mahengyang
08:36OK, I'll skip it and write tests for the other funcs

mahengyang
07:57context: func [spec [block!]][make object! spec]
07:57how to test this func?
07:58where is it used?
PeterWAWood
08:09It is simply shorthand for make object! so o: context [a: 1 b: 2] is the same as o: make object! [a: 1 b: 2].
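(A quick console illustration of that equivalence:)

```red
Red []

;-- context is plain shorthand for make object!
o1: context      [a: 1 b: 2]
o2: make object! [a: 1 b: 2]
probe o1/a + o1/b    ;-- 3
probe o2/a + o2/b    ;-- 3
```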
PeterWAWood
08:12All that is needed is one test to check that context correctly makes an object. Something like this:
--test-- "context-1"
    c1-c: context [a: 1 b: "345" f: function []["Okay"]]
    --assert c1-c/a = 1
    --assert c1-c/b = "345"
    --assert "Okay" = c1-c/f
    --assert Object! = type? c1-c
mahengyang
08:13thks

mahengyang
09:06@PeterWAWood tests for environment/functions.red are already complete, but I do not know how to run them. I added ../environment/functions-test.red to tests/source/units/all-tests.txt, then ran $ rebol -s run-all.r, and got this error:
** Access Error: Cannot open /Users/ma/puffin/red.git/tests/source/units/auto-tests/interp-../environment/functions-test.red
** Where: write-test-header
** Near: write file-out tests

There is no file named tests/source/units/auto-tests/functions-test.red.
09:10I read run-all.r and found the real action is in tests/source/units/run-all-init.r:
;; make auto files if needed
do %make-red-auto-tests.r
do %make-interpreter-auto-tests.r

;; build run-all-comp.red and run-all-interp.red
do %make-run-all-red.r

;; build the each test runners
do %make-run-each-runner.r

this file, make-interpreter-auto-tests.r, seems to generate lots of files under the auto-tests directory
PeterWAWood
10:08make-interpreter-auto-tests.r can currently only handle files in `tests/source/units` or a sub-directory of it. I will need to take a look at it to work out how to handle files in other dirs.
10:09I don't have time to do that at the moment. I will try to make time on Monday or Tuesday next week.

mahengyang
03:37
...using libRedRT built on 13-Oct-2017/16:05:50+8:00
*** Compilation Error: a routine must have a name
*** near: [routine [1] [2]]

test code:
rt-1: try [routine [1] [2]]
--assert error? rt-1

I got this error when running the unit tests. Does anybody know how to use routine?
03:39The source code for routine:
routine: func [spec [block!] body [block!]][
	cause-error 'internal 'routines []
]

Seems it just throws an error?
03:41@PeterWAWood I solved the problem with ../environment/functions-test.red in all-tests.txt
03:43by altering make-interpreter-auto-tests.r and make-run-all-red.r; they generate so many files in the auto-tests and auto-tests/run-all folders, a little complex
09:09routine already had a test file named routine-test.red, so I just deleted the routine tests from my functions-test.red
greggirwin
17:31What happens if you do it this way?
try [rt-1: routine [1] [2]]
17:38Try seems to be the issue, though your spec block isn't valid either. If you put the try around it (try [rt-1: routine [a] [a]]), you get:
...using libRedRT built on 16-Oct-2017/11:33:11-6:00

*** Compilation Error: declaring a function at this level is not allowed 
*** near: [
    rt-1: func [a] [a] 
    stack/mark-native ~set 
    word/push ~rt-1
]
17:39I'll let @PeterWAWood comment on how best to catch compiler errors in the test suite, since try will hide the actual error here.

mahengyang
08:49
===group=== fifth tests
--test-- fifth-3 FAILED**************
~~~finished test~~~  run-all-interp
  Number of Tests Performed:      5531
  Number of Assertions Performed: 9633
  Number of Assertions Passed:    9632
  Number of Assertions Failed:    1
****************TEST FAILURES****************

but the test code is very simple --test-- "fifth-3" --assert 5 = fifth 1.2.3.4.5
08:55by the way, what is the difference between run-all-comp1 and run-all-interp?
the fifth-3 test succeeded in run-all-comp1, but failed in run-all-interp
PeterWAWood
23:39Sorry I am very busy at the moment and don't have much spare time.
23:45"what is the difference between run-all-comp1 and run-all-interp?"

run-all-comp1 and run-all-comp2 contain all the compiled tests.
run-all-interp contains all the tests compiled but to run using the interpreter rather than compiled code.

The purpose is to test both the compiler and the interpreter.

Some tests will not run under the compiler, some will not run under the interpreter. (These tests need to be identified and protected either by using the pre-processor or checking at runtime whether the code is being interpreted or not.)
23:52--test-- "fifth-3" --assert 5 = fifth 1.2.3.4.5 - I checked against the latest master and this should work. The most likely reason for this failing is that fifth has been set in another test.
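(A hypothetical illustration of that failure mode, since all the interpreted tests share one context in run-all-interp:)

```red
Red []

probe 5 = fifth 1.2.3.4.5    ;-- true with the stock fifth
fifth: does [99]             ;-- an earlier test rebinding the word (hypothetical)
probe 5 = fifth 1.2.3.4.5    ;-- now false: this fifth ignores the tuple
```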
23:56The current compiler (written in Rebol) stops when it encounters an error. This means we would need to have a separate compilation for every compiler error. If we did that the tests would take far too long to run. As a result, we don't have any tests that the compiler correctly reports errors.
23:58I still need to look at running tests from the tests/source/environment directory. In addition to making them work in run-all.r, we need them to work in build-arm-tests.r.

PeterWAWood
00:02The current structure of the tests and the scripts to run them is very messy. This is mainly due to the way they have evolved over time. Thankfully quick-test.red and quick-test.reds seem to work very well. We need to completely overhaul "test running" though.
00:04We will write the new test runners in Red rather than Rebol. I hope that a new test runner can be included in the 0.7.0 release.