Using console.exe compiled in --debug mode, I have been able to see the output errors:

probe system/schemes/odbc/state/drivers
probe system/schemes/odbc/state/sources

I tried connecting in several ways:

connection: open make port! [
scheme: 'odbc
target: "driver={SQL Server};server=192.168.x.x;port=1433;database=ADB_RD32;uid=********;pwd=********;AutoTranslate=yes"
]
connection: open make port! [scheme: 'odbc target: "dsn=RD32;AutoTranslate=yes"]
connection: open odbc://RD32 ;-- with AutoTranslate=Yes configured in admin panel
Conversion to string!s is bound to fail. CharacterSet=Unicode is what solved it, but these connection string params differ from database to database and/or from driver to driver. For SQL Server there is no option with this name, "AutoTranslate" being the closest fit.

connection: open make port! [
scheme: 'odbc
target: "driver={SQL Server};server=192.168.x.x;port=1433;database=ADB_RD32;uid=********;pwd=********;AutoTranslate=yes"
]
[
c [connection: open make port! [scheme: 'odbc target: "dsn=RD32;AutoTranslate=yes"]]
d [connection: open odbc://RD32] ;-- with AutoTranslate=Yes configured in admin panel
]
>> switch 'c con
28000 18456 [Microsoft][ODBC SQL Server Driver][SQL Server]Accesso non riuscito
per l'utente ''. (= Login failed for user ''.)
*** Script Error: ODBC error: ["28000" 18456 {[Microsoft][ODBC SQL Server Driver
][SQL Server]Accesso non riusc
*** Where: ctx||440~open-connection
*** Near : "dsn=RD32;AutoTranslate=yes"
*** Stack:
>> switch 'd con
28000 18456 [Microsoft][ODBC SQL Server Driver][SQL Server]Accesso non riuscito
per l'utente ''.
*** Script Error: ODBC error: ["28000" 18456 {[Microsoft][ODBC SQL Server Driver
][SQL Server]Accesso non riusc
*** Where: ctx||440~open-connection
*** Near : entity/spec/host
*** Stack:

actor/open: connection
init-odbc
OPEN-ENVIRONMENT [
SQLAllocHandle 0
SQLSetEnvAttr 0
]
OPEN-CONNECTION [
SQLAllocHandle 0
SQLDriverConnect 65535
DIAGNOSE-ERROR [
allocate state, 12 bytes @ 03C161E8
allocate message, 4096 bytes @ 03C18020
SQLGetDiagRec 0
08001
08001 17 [Microsoft][ODBC SQL Server Driver][DBNETLIB]Server SQL inesistente o a
ccesso negato. (= SQL Server does not exist or access was denied.)
SQLGetDiagRec 0
01000
01000 2 [Microsoft][ODBC SQL Server Driver][DBNETLIB]ConnectionOpen (Connect()).
SQLGetDiagRec 100
free state @ 03C161E8
free message @ 03C18020
]
*** Script Error: ODBC error: ["08001" 17 {[Microsoft][ODBC SQL Server Driver][D
BNETLIB]Server SQL inesistente
*** Where: select-key*
*** Near : {driver={SQL Server};server=192.168.0.5;d}
*** Stack:

Why d as the last char?

*** Near only cites just so much from your code. select-key* isn't a function belonging to the scheme. You have DSNames RD32 and RD32B. Are you sure that these can connect when you connection-test them in the 32-bit ODBC admin panel?

>> switch 'b con
actor/open: connection
init-odbc
OPEN-ENVIRONMENT [
SQLAllocHandle 0
SQLSetEnvAttr 0
]
OPEN-CONNECTION [
SQLAllocHandle 0
SQLDriverConnect 1
DIAGNOSE-ERROR [
allocate state, 12 bytes @ 03206270
allocate message, 4096 bytes @ 0320A110
SQLGetDiagRec 0
01000
01000 5701 [Microsoft][ODBC SQL Server Driver][SQL Server]Il contesto di databas
e è stato sostituito con 'ADB_RD'. (= Changed database context to 'ADB_RD'.)
SQLGetDiagRec 0
01000
01000 5703 [Microsoft][ODBC SQL Server Driver][SQL Server]L'impostazione della l
ingua è stata sostituita con Italiano. (= Changed language setting to Italian.)
SQLGetDiagRec 100
free state @ 03206270
free message @ 0320A110
]
]
about-connection
*** Access Error: invalid UTF-8 encoding: #{C0C1C2C3}
*** Where: parse
*** Near : info
*** Stack: eval-set-path

port is not recognized as attribute. I get invalid UTF-8 encoding: #{C0C1C2C3}.

Please run switch 'b con in a --debug CLI console (because the GUI won't provide debug information printed on R/S level). What is the switch 'b con connection method looking like? You posted only the code for 'c and 'd.

b is the following:

b-ok [
connection: open make port! [
scheme: 'odbc
target: "driver={SQL Server};server=DIBI2012\SQL2014;database=ADB_RAVIOL_DORO;uid=*********;pwd=********;AutoTranslate=yes"
]
]

>> to-char to-integer #{c0}
== #"À"

Fixed: strings are now loaded with string/load from the UTF-16LE encoded buffer. Thanks anyway!

I will recompile with --debug and try some queries. Typewriter mode is still there, so I can't use the GUI.

## Inserting Statements, Retrieving Results
The following example should give you an (informal) idea on how SQL statements are
executed.
You execute a SQL statement with `insert`:
----
insert statement "INSERT INTO Languages (Name) VALUES ('Red')"
== 1
----

What does probe statement/state/columns show?

On copy, I receive: "01004 0 [Microsoft][ODBC SQL Server Driver]Troncamento a destra della stringa di dati." (= right truncation of the data string)
Try select cast(NotesXML as varchar(128000)) from ..., or whatever syntax SQL Server requires:

> stmt: open conn: open odbc://whatever
> insert stmt "select cast(NoteXML as varchar(2000)) limit 3"
> probe copy stmt
select cast(notexml as varchar(8000)) as n from dotes where notexml is not null

(the as n is not needed, forgot to delete it)

Then do %script.red with:

cn: switch 'b-ok con
st: open cn
st/state/window: 100000
insert st "select cast(notexml as varchar(8000)) as n from dotes where notexml is not null"
copy st
That should work for your cast(notexml as varchar(8000)) case, too. If an error! was returned in such a column, that could prove a viable solution for other cases where no conversion is possible. But it wouldn't help you anyway: given your NotesXML column, a buffer of size statement/state/window * (2^31-1) would have to be allocated in advance if you don't limit the column size to something sufficient and more reasonable. The buffer has to be allocated to that size before fetching rows into it, because before fetching the actual column values there is no way to know their actual size in a particular row. Hence the above "it won't fit into the API" as well. What would be required here is a way to retrieve a single BLOB without allocating a row buffer for that column. Far from impossible, but no priority and no plans for that.

>> do/args %red.r "-r --debug %environment/console/CLI/console.red"
Script: "Red command-line front-end" (none)
Script: "Encap virtual filesystem" (21-Sep-2009)
-=== Red Compiler 0.6.4 ===-
Compiling C:\xxxxxxxxxxxxxxxxxx\Red___ODBC\red-odbc\environment\console\CLI\console.red ...
...compilation time : 5275 ms
Target: MSDOS
Compiling to native code...
*** Compilation Error: unknown identifier in literal array: native
*** in file: %/C/xxxxxxxxxxxxxxxxxx/Red___ODBC/red-odbc/environment/schemes/odbc.reds
*** in function: exec/odbc/diagnose-error
*** at line: 2927
*** near: [[" " native " " lf]
print-wstring as c-string! message red/platform/prin* [lf]
]
>>

With port=1433 it seems to not give the error anymore.

c-DNS1 [connection: open make port! [scheme: 'odbc target: "dsn=RD32;AutoTranslate=yes"]]
c-DNS1 [connection: open make port! [scheme: 'odbc target: "dsn=RD32"]]
The target: "DSN=" form is largely untested. I guessed it should work already, but I haven't really tried this. If you like, open a ticket for this so the info doesn't get lost.

cn: switch 'b-ok con
st: open cn
st/state/window: 100000
insert st rejoin ["select cd_cf , descrizione from cf"]
a: copy st
cn: switch 'b-ok-notranslate con
st: open cn
st/state/window: 100000
insert st rejoin ["select cd_cf , descrizione from cf"]
b: copy st
probe exclude a b
>> do %odbc-test.red
[]
== []
That was with autotranslate=no.

probe st/state/columns might give some clues on the SQL types involved.

[
cd-cf "cd_cf" 1 7 0 0 handle! 16 handle!
descrizione "descrizione" 12 80 0 0 handle! 162 handle!
]

>> do %odbc-test.red
*** Internal Error: not enough memory
*** Where: a
*** Near : a: copy st halt cn: switch 'b-ok-notranslate
*** Stack: do-file
What does st/state/columns show?

With top 1000 (same as LIMIT 1000) I have the not enough memory error. With TOP 100000 (one hundred thousand, 100 times more) it works. With TOP 100000 and st/state/window: 1500000 the console crashes while running the query, and this EACH TIME.

About ON CONFLICT ... DO UPDATE: how could I do it? Simply add it after the prepared values? Like:

insert snowwhite [
"INSERT INTO Dwarves (Num, Name) VALUES (?, ?)"
[1 "Dopey"] [2 "Doc"] [3 "Bashful"] [4 "Sneezy"] [5 "Happy"] [6 "Grumpy"] [7 "Sleepy"]
"ON CONFLICT ..... "
]

I get: Script Error: expected block! not string!

The ON CONFLICT clause belongs inside the statement string itself:

insert snowwhite ["INSERT INTO Dwarves (Num, Name) VALUES (?, ?) ON CONFLICT ..." [1 "Dopey"] [2 "Doc"]]
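For clarity, a hedged sketch of that single-string form with a concrete conflict clause filled in. The Dwarves statement is the doc's example; the PostgreSQL-style ON CONFLICT target and an already-open snowwhite statement are assumptions:

```red
Red []
;-- assumes snowwhite is an open ODBC statement on a PostgreSQL connection
;-- the conflict target (Num) is hypothetical; adapt to your unique key
insert snowwhite [
    {INSERT INTO Dwarves (Num, Name) VALUES (?, ?)
     ON CONFLICT (Num) DO UPDATE SET Name = EXCLUDED.Name}
    [1 "Dopey"] [2 "Doc"] [3 "Bashful"]
]
```

The whole SQL text, conflict clause included, is the single string; only the parameter blocks follow it.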
st/state/window: 1'000'000 is memory-wise about the most inefficient way you can come up with when retrieving a million rows: this requires the scheme to allocate a huge buffer of 1'000'000 * total-row-size bytes, and additionally Red has to allocate a block of length 1'000'000 * num-of-columns with as many Red values in this block.

That's what next is for: that way you can "stream" rows, handle the first row set, process that, then next to the next row set and so on, a thousand rows at a time ... See https://github.com/gurzgri/red/blob/odbc/environment/schemes/odbc.md#retrieving-rows

Do you mean loop 3 [open connection open statement execute statement copy results close connection] or open connection open statement loop 3 [execute statement copy results] close connection?

Are NULLs returned as none!, i.e. as #[none], or as the composed/reduced word none?

Is Total-Row-Size in bytes, and the length (num) 1'000'000 * num-of-columns?

The do %odbc-test.red of the following script never closes the console:

con: [
b-ok [
connection: open make port! [
scheme: 'odbc
target: "driver={SQL Server};server=DIBI2012\SQL2014;database=ADB_RAVIOL_DORO;uid=*********;pwd=********;AutoTranslate=yes"
]
]
]
sql-cols: {cols-list} ; (over 100, separated with ",")
cn: switch 'b-ok con
st: open cn
st/state/window: 1500000
insert st rejoin ["select top 100000 " sql-cols " from dotes"]
data: copy st
;probe data

sql-cols is a string with >100 column names.

The problem is st. You open a new statement in a new connection, losing access to the previous statement and the previous connection. Hence, these never get closed and can't release housekeeping low-level buffers. You can close all statements attached to a connection by just closing the connection.

Column names are loaded as values of type WORD. Why don't you map them to strings, as there are some DB engines which accept column names like 2WORK which are not legal in Red?

>> column-of: func [statement word] [copy select statement/state/columns] ()
>> column-of statement 'user-id
== "UserID"
The original names are in statement/state/columns.

You can AS "..." such column names so they adhere to what can be loaded as word. Or extract/index all strings from stmt/state/columns in what's worth half a line of code.

I build rows with make make object! [proto] [values], where [proto] initializes the columns of the row and [values] sets all or part of them. Having strings as column names is not possible for objects, so, I have observed to myself, I need to rename the "illegal" words to legal ones and keep a map of [original-name remapped-name]. That's all, you don't have to do anything, your code is perfect.

I pass none and it's inserted as 1, not as NULL. Same with #[none].

>> insert schools "select * from public.schools"
== [authority-id address-id caption city name school-id authority-key]
>> extract/index schools/state/columns 9 2
== ["authority_id" "address_id" "caption" "city" "name" "school_id" "authority_key"]
>> proto-of: func [query] [make map! collect [foreach col extract/index query/state/columns 9 2 [keep reduce [col none]]]] ()
>> populate: func [row values] [foreach column keys-of row [row/:column: first values values: next values] row] ()
>> row: populate proto: proto-of schools first rows: copy schools
== #(
"authority_id" 12
"address_id" 377534
"caption" "HS ******-****"
"city" "H*****heim"
"name" none
"school_id" 25815
"author...

You mean inserting NULL values with values of type none!? Might be a bug, but it may depend. PostgreSQL? What does probe data-insert-statement/state/columns look like? The 6th value out of 9 per column is the nullability of the column. Maybe your currency_code column allows no NULL values? A 0 (zero) means "no NULLs allowed".

I showed you insert stmt ["some sql" [...] [...] [...] ...] yesterday; it's the same today. You again have a block too much around your param values.

#[none] is inserted as 1 whether the column is nullable or not. I'll address that.

Now I get wrong syntax for timestamp with time zone: "1" for the none above?

insert data-insert-statement compose/deep [
{INSERT INTO notifications ("purchaseNumber", "isGOZ", "docPublishDate", "responsibleOrg_inn", "responsibleOrg_regNum", "responsibleOrg_kpp", "responsibleOrg_fullName", "responsibleOrg_postAddress", "responsibleOrg_factAddress", "contactPerson_lastName", "contactPerson_firstName", "contactPerson_middleName", "responsibleInfo_contactEMail", "responsibleInfo_contactPhone", "specializedOrg_fullName", "specializedOrg_postAddress", "specializedOrg_factAddress", "specializedOrg_INN", "specializedOrg_regNum", "responsibleRole", "placingWay_code", "placingWay_name", "etp_code", "etp_name", "etp_url", "procedureInfo_collecting_startDate", "procedureInfo_collecting_place", "procedureInfo_collecting_endDate", "procedureInfo_scoring_date", "procedureInfo_bidding_date", "tag", "region", "xml_path", "fz_name") VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) ON CONFLICT ("purchaseNumber")
DO UPDATE SET "purchaseNumber" = EXCLUDED."purchaseNumber", "isGOZ" = EXCLUDED."isGOZ", "docPublishDate" = EXCLUDED."docPublishDate", "responsibleOrg_inn" = EXCLUDED."responsibleOrg_inn", "responsibleOrg_regNum" = EXCLUDED."responsibleOrg_regNum", "responsibleOrg_kpp" = EXCLUDED."responsibleOrg_kpp", "responsibleOrg_fullName" = EXCLUDED."responsibleOrg_fullName", "responsibleOrg_postAddress" = EXCLUDED."responsibleOrg_postAddress", "responsibleOrg_factAddress" = EXCLUDED."responsibleOrg_factAddress", "contactPerson_lastName" = EXCLUDED."contactPerson_lastName", "contactPerson_firstName" = EXCLUDED."contactPerson_firstName", "contactPerson_middleName" = EXCLUDED."contactPerson_middleName", "responsibleInfo_contactEMail" = EXCLUDED."responsibleInfo_contactEMail", "responsibleInfo_contactPhone" = EXCLUDED."responsibleInfo_contactPhone", "specializedOrg_fullName" = EXCLUDED."specializedOrg_fullName", "specializedOrg_postAddress" = EXCLUDED."specializedOrg_postAddress", "specializedOrg_factAddress" = EXCLUDED."specializedOrg_factAddress", "specializedOrg_INN" = EXCLUDED."specializedOrg_INN", "specializedOrg_regNum" = EXCLUDED."specializedOrg_regNum", "responsibleRole" = EXCLUDED."responsibleRole", "placingWay_code" = EXCLUDED."placingWay_code", "placingWay_name" = EXCLUDED."placingWay_name", "etp_code" = EXCLUDED."etp_code", "etp_name" = EXCLUDED."etp_name", "etp_url" = EXCLUDED."etp_url", "procedureInfo_collecting_startDate" = EXCLUDED."procedureInfo_collecting_startDate", "procedureInfo_collecting_place" = EXCLUDED."procedureInfo_collecting_place", "procedureInfo_collecting_endDate" = EXCLUDED."procedureInfo_collecting_endDate", "procedureInfo_scoring_date" = EXCLUDED."procedureInfo_scoring_date", "procedureInfo_bidding_date" = EXCLUDED."procedureInfo_bidding_date", "tag" = EXCLUDED."tag", "region" = EXCLUDED."region", "xml_path" = EXCLUDED."xml_path", "fz_name" = EXCLUDED."fz_name"}
([["0376200000417000011" none "2017-03-21T14:41:58.534+03:00" "0105025965" "03762000004" "010501001" {ГОСУДАРСТВЕННОЕ БЮДЖЕТНОЕ УЧРЕЖДЕНИЕ ЗДРАВООХРАНЕНИЯ РЕСПУБЛИКИ АДЫГЕЯ АДЫГЕЙСКАЯ РЕСПУБЛИКАНСКАЯ КЛИНИЧЕСКАЯ БОЛЬНИЦА} {Российская Федерация, 385000, Адыгея Респ, Майкоп г, УЛ ЖУКОВСКОГО, 4} {Российская Федерация, 385000, Адыгея Респ, Майкоп г, УЛ ЖУКОВСКОГО, 4} "Тлишева" "Мариет" "Анатольевна" "arkb.torgi@mail.ru" "7-8772-524957" none none none none none "CU" "ZKP44" "Запрос котировок" none none none "2017-03-22T09:00:00+03:00" {Республика Адыгея, г. Майкоп, ул. Жуковского, 4, каб. 15.} "2017-03-30T10:00:00+03:00" none none "is_lots" "Adygeja_Resp" none "fz44"]])
]
]
== 1 ?

Shouldn't that be none?

cn: switch 'b-ok con
st: open cn
st/state/window: 10000
st/state/cursor: 'static
columns: insert st rejoin ["select top 10000 * from dotes where attributi is not null "]
probe a: next st
probe a 'attributi
> none
attributi is not the first blob column, but I have not retrieved the others.

You wrote probe a 'attributi, but that's not your fault: there is a mistake in the docs, it must be pick st 'attributi, I'll fix that immediately. You pick from the statement port, as you do with other port functions like photos: open album: open odbc://album in the docs.

cn: switch 'b-ok con
rw: open st: open cn
st/state/window: 10000
st/state/cursor: 'static
columns: select rw "select top 10000 * from cf where attributi is not null "
>> invalid id field in spec block

insert is correct, obviously it was too late yesterday for me :( You fetch rows (with next), and for TEXTs and BLOBs you get the deferred word instead of the value. Then you pick the deferred columns' values.

Now I get Invalid port action:

cn: switch 'b-ok con
st: open cn
st/state/window: 10000
st/state/cursor: 'static
columns: insert st "select top 10000 * from cf where attributi is not null "
b: next st
probe pick b 'attributi
next must be used, because with copy no cursor would be open, and the low-level SQLGetData requires a valid cursor.

I did b: next st, so I should be on the statement port! Let's restart from scratch: is this correct in your documentation?
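A minimal sketch of that flow, under stated assumptions (hypothetical DSN, table and column names; next keeps the cursor open so a deferred long-data value can then be fetched via SQLGetData):

```red
Red []
;-- hypothetical DSN, table and column names
cn: open odbc://my-dsn                     ;-- connection
st: open cn                                ;-- statement on that connection
st/state/cursor: 'static                   ;-- static cursor for long data
insert st "SELECT id, notes FROM docs"     ;-- notes is a TEXT/BLOB column
rows: next st                              ;-- fetch a row set, cursor stays open
note: pick st 'notes                       ;-- fetch the deferred value from the statement
close cn                                   ;-- closes attached statements, too
```

With copy instead of next, no cursor would remain open for the pick.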
`next st` would return a value of type port!? That's totally not true. It returns a block of rows.

>> spec-of :next
== [
"Returns a series at the next index"
series [series! port!]
return: [series! port!]
]

So next takes a port and returns a series.

When you call next or copy or insert etc. on a port! value, a corresponding function next etc. in the port's scheme is called. That can do whatever it wants and return whatever it thinks makes sense.

Did you try st/state/window: 1? Does it halt there?

I get invalid id field in spec block. Here I use two opens (data: open st: open cn), while before I have used only one:

cn: switch 'b-ok con
data: open st: open cn ;<\\\\\\\\\\\\\\\\\\\\\\\\\\\\
You do cn: switch 'b-ok con for reasons unbeknownst to me, then on cn you open a statement with st: open cn. I didn't say you should open a "something" from the statement. In photos: open album: open odbc://album I open a statement on a connection opened beforehand. Only two opens. (No open make port! for the statement either.)

columns: insert st "select top 10000 * from cf where attributi is not null "
result: next st
probe pick result 'attributi
insert photos "SELECT FileName, Image, Thumbnail FROM Photos LIMIT 1000"
== [file-name image thumbnail]
images: next photos ;-- retrieve first row
== [
["Me at the beach.jpg" deferred deferred]
]
picture: pick photos 'image

You don't do next on the photos statement, like I do on the st statement?

It's not pick result 'attributi, it is probe pick st 'attributi please — you pick from the statement, not from the photos! result.

I get Right truncation of the string or something similar.

Recompile with --debug and put everything in a gist; do probe st/state/columns too, it would be too much to post here. Set st/state/debug?: yes before probe pick st 'attributi, and please rerun it in a fresh session. The right truncation message is expected: the long data is read in chunks, hence low-level SQLGetData doesn't return SQL_SUCCESS but SQL_SUCCESS_WITH_INFO, and that info is that the chunk just read isn't the whole thing already, so read another chunk ... In the end there should be SQL_SUCCESS though, and no message.

I still get none values.

"Attributi" is of SQL type 65526 = SQL_WLONGVARCHAR which should work, and with size 1073741823 it's well in the integer! range, too. Why it's choking on the index with SQLSTATE 07009 I don't understand.

st: open cn: switch 'b-ok con
st/state/cursor: 'static
probe columns: insert/part st "select top 1 attributi from cf where attributi is not null" 1
st/state/debug?: on
probe records: next st
probe pick st 'attributi
close cn
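Putting the row-set mechanics together, a hedged sketch of streaming a result until exhaustion (hypothetical DSN and table; an empty block signals that no rows remain):

```red
Red []
cn: open odbc://my-dsn                   ;-- hypothetical DSN
st: open cn
st/state/window: 1000                    ;-- rows per fetched row set
insert st "SELECT id, name FROM items"   ;-- hypothetical table
until [
    rows: next st                        ;-- next returns a block of rows
    foreach row rows [probe row]         ;-- process the current row set
    empty? rows                          ;-- [] means the result is exhausted
]
close cn                                 ;-- also closes attached statements
```

This keeps the per-fetch buffer at window size instead of allocating for the whole result at once.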
insert/part is just a shorthand for st/state/window: 1, that's not new.

If SIZE is expressed in bytes, there is something strange. That column contains attributes, which are composed of a group name, the group members and their flag: actually max 50 members * 20 characters. Even if they are stored as XML, I don't think they could take 1 gigabyte. XML is big, but not so big 😁

Try changing probe pick st 'attributi to probe pick st 1 beforehand. On cn, what is the value of cn/state/info/"getdata-extensions"? It might be a limitation of SQL Server; PostgreSQL e.g. describes its capabilities as [any-column any-order] there.

insert/part st "SELECT TOP 1 Attributi FROM cf WHERE Attributi IS NOT NULL" 1
insert/part st "SELECT TOP 1 Attributi, CodiceIPA FROM cf WHERE Attributi IS NOT NULL" 1
insert/part st "SELECT TOP 1 CodiceIPA, Attributi FROM cf WHERE Attributi IS NOT NULL" 1
Instead of SELECT * FROM with long data columns anywhere in the selected columns, you'll have to explicitly do SELECT NormalCol1, NormalCol2, ..., NormalColN, LongData1, ..., LongDataN to put the columns holding long data at the end of the columns list.

ut: copy ""
call/output {sqlite3 test.db --csv "select * from demo order by country"} ut
load/as ut 'csv
== [["Alice" "Prof" "CA"] ["Bob" "M" "FR"] ["Eve" "Mx" "MX"]]

02D51110: 3c00 7200 6f00 7700 7300 3e00 3c00 7200 6f00 7700 2000 6100 7400 7
400 7200 6900 <.r.o.w.s.>.<.r.o.w. .a.t.t.r.i.
02D51130: 6200 7500 7400 6f00 3d00 2200 3700 3600 2200 2f00 3e00 3c00 7200 6
f00 7700 2000 b.u.t.o.=.".7.6."./.>.<.r.o.w. .
02D51150: 6100 7400 7400 7200 6900 6200 7500 7400 6f00 3d00 2200 3100 3000 2
200 2f00 3e00 a.t.t.r.i.b.u.t.o.=.".1.0."./.>.
02D51170: 3c00 7200 6f00 7700 2000 6100 7400 7400 7200 6900 6200 7500 7400 6
f00 3d00 2200 <.r.o.w. .a.t.t.r.i.b.u.t.o.=.".
02D51190: 3400 3300 2200 2f00 3e00 3c00 7200 6f00 7700 2000 6100 7400 7400 7
200 6900 6200 4.3."./.>.<.r.o.w. .a.t.t.r.i.b.
02D511B0: 7500 7400 6f00 3d00 2200 3300 3300 2200 2f00 3e00 3c00 7200 6f00 7
700 2000 6100 u.t.o.=.".3.3."./.>.<.r.o.w. .a.
02D511D0: 7400 7400 7200 6900 6200 7500 7400 6f00 3d00 2200 3300 3400 2200 2
f00 3e00 3c00 t.t.r.i.b.u.t.o.=.".3.4."./.>.<.
02D511F0: 7200 6f00 7700 2000 6100 7400 7400 7200 6900 6200 7500 7400 6f00 3
d00 2200 3300 r.o.w. .a.t.t.r.i.b.u.t.o.=.".3.
02D51210: 3500 2200 2f00 3e00 3c00 7200 6f00 7700 2000 6100 7400 7400 7200 6
900 6200 7500 5."./.>.<.r.o.w. .a.t.t.r.i.b.u.
02D51230: 7400 6f00 3d00 2200 3100 3000 3700 2200 2f00 3e00 3c00 2f00 7200 6
f00 7700 7300 t.o.=.".1.0.7."./.>.<./.r.o.w.s.
02D51250: 3e00
>.
--------------------------------------------------------------------------------
---------------------------------------------------
{<rows><row attributo="76"/><row attributo="10"/><row attributo="43"/><row attri
buto="33"/><row attributo="34"/><row attributo="35"/><row attributo="107"/></row
s>}
*** Script Error: If is missing its cond argument
*** Where: close
*** Near : close cn If
*** Stack: do-file
>>

I have changed pick st 'attributi to `pick st 1` where the select is CodiceIPA, Attributi, and I confirm it CORRECTLY returns none.

insert/part st "SELECT TOP 10 CodiceIPA, Attributi FROM cf WHERE Attributi IS NOT NULL" 1
st/state/debug?: on
probe records: next st
probe records: next st
probe pick st 2 ;<<<<< Now it is ok ;-)
[block]
--------------------------------------------------------------------------------
---------------------------------------------------
0035A220: 28c4 3500 c000 8f01 2000 6400 6900 2000 7300 7400 7200 6900 6e00 6
700 6100 2000 (Ä5.À... .d.i. .s.t.r.i.n.g.a. .
0035A240: 6400 6900 2000 6300 6f00 6e00 6e00 6500 7300 7300 6900 6f00 6e00 6
500 2000 6e00 d.i. .c.o.n.n.e.s.s.i.o.n.e. .n.
0035A260: 6f00 6e00 2000 7600 6100 6c00 6900 6400 6f00 2e00 0000 0000 0800 0
008 2a8b 0600 o.n. .v.a.l.i.d.o...........*...
0035A280: 789d 3500 9805 8f01 6f00 6e00 6e00 3a00 2000 3000 3400 3000 3500 3
900 3500 4200 x.5.....o.n.n.:. .0.4.0.5.9.5.B.
0035A2A0: 3800 0000 0000 0000 0200 0103
8...........
--------------------------------------------------------------------------------
---------------------------------------------------
--------------------------------------------------------------------------------
---------------------------------------------------
0035A2E8:
--------------------------------------------------------------------------------
---------------------------------------------------
[
[none deferred]
[none deferred]
[none deferred]
[none deferred]
[none deferred]
[none deferred]
[none deferred]
[none deferred]
[none deferred]
[none deferred]
]
[]
*** Script Error: ODBC error: ["24000" 0 {[Microsoft][Driver Manager ODBC] Stato
del cursore non valido.}] (= invalid cursor state)
*** Where: ctx||440~fetch-value
*** Near : pick st 2 close cn If
*** Stack: do-file
>>

(The hex dump decodes to "connection string not valid".) The If which is missing its cond argument isn't mine. Please post whole scripts like cn: open ..., not just parts.

st: open cn: switch 'b-ok con
probe cn/state/info/"getdata-extensions"
st/state/cursor: 'static
insert/part st "SELECT TOP 10 CodiceIPA, Attributi FROM cf WHERE Attributi IS NOT NULL" 1
st/state/debug?: on
probe records: next st
probe records: next st
probe pick st 2 ;'attributi
close cn
The insert/part shorthand was somehow missing, the window therefore was still defaulting to 10 rows. These were fetched with the first next st, and for the second next st there were no more rows left because of TOP 10. Hence "invalid cursor state". Pushed a fix.

There's no real need for insert/part ... 1, as I already have full cursor movements implemented here. After fetching a rowset of the size you like, you'll be able to move and position the cursor in the rowset wherever you want and can then pick the desired column from there.

info: 10025 async-notification
actor/open: statement
OPEN-STATEMENT [
SQLAllocHandle 0
allocate fetched, 4 bytes @ 002C0598
]
[block]
SET-STATEMENT [
SQLSetStmtAttr 0
]
actor/insert
*** Script Error: all does not allow logic! for its conds argument
*** Where: all
*** Near : entity
*** Stack: do-file
>>

Could you run .\console.exe --do script.red > script.log 2>&1 instead of running the script interactively in the Red console? The "if is missing its cond argument" with *** Near: close cn If seems to come from your code?!

You're right: it was an IF camouflaged near a comment, with nearly the same color as the comment.

st: open cn: switch 'b-ok con
probe cn/state/info/"getdata-extensions"
st/state/cursor: 'static
insert/part st "SELECT TOP 15 Attributi, CodiceIPA FROM cf WHERE Attributi IS NOT NULL" 10 ;(1) <<<----
st/state/debug?: on
probe records: next st ;<<<----- (2)
print ["First round, length: " length? records]
probe records: next st ;<<<----- (3)
print ["Second round, length:" length? records]
probe records: next records ;<<<---- (4)
print ["Next on RECORDS, length: " length? records]
print "Attributi Value:"
probe pick st 1 ;'attributi
close cn
At point 2 the length was 10, because it matched the window size. In fact, at the subsequent NEXT at point 3, the block length was only 5. With NEXT on the block instead of the statement (point 4), I get 4.

probe records: next st
That returns [] as results from the statement, because there are no more records.

With PICK st 1 when there are no more records, it gets an INVALID CURSOR STATE from the ODBC driver and not none or a Red error!?

*** Script Error: ODBC error: ["24000" 0 {[Microsoft][Driver Manager ODBC] Stato
del cursore non valido.}]
*** Where: ctx||440~fetch-value
*** Near : pick st 1 close cn
*** Stack: do-file

After git pull I ran the next command and got an error:

>> do/args %red.r "-c -e --debug %/d/code/zakupki/parser-v2/XMLParser.red"
** Access Error: Cannot open /C/red-odbc/system/red.r
** Near: do/args %red.r {-c -e --debug %/d/code/zakupki/parser-v2/XMLParser.red}

Go to the red-odbc directory, copy Rebol there, open it, and write one of these (just one):

do/args %red.r "-r %environment/console/CLI/console.red"
do/args %red.r "-r --debug %environment/console/CLI/console.red"
do/args %red.r "-r -t Windows %environment/console/GUI/gui-console.red"
*** Script Error: ODBC error: ["HY000" 1 {server closed the connection unexpectedly^/^-This probably means the
*** Where: ctx||610~execute-statement
*** Near : 'any-string!

First, the /deep refinement as used in your screenshot is superfluous. Second, insert values of type unset! aren't (and probably won't ever be) supported, I can't imagine a matching SQL type.

I asked you not to probe more, yet you always come with code printing things.

About pick on "no record": it's not a problem, I will wait for the time when things stabilize. Returning none! is not an option, as it would be indistinguishable from a NULL value in a column, because these are returned as none. The remaining question is: what difference do you see between the current Script error and a Red error!? There is none.

Meanwhile, have you tried picking by column number, too? It should work equally well and make your code more readable.

I used pick just because you initially suggested using it, so I have "obeyed", and in all the following tests I have used pick when there was the need to debug. But I prefer using column names too! Pick is a good option, but if coders use attempt to pick a column, they end up with the same none return value.

If they'd attempt to pick a column, they would obfuscate the error no matter how it is thrown. Now you're contradicting yourself, aren't you?

attempt is the most used shortcut to check for possible errors. But...

>> y: [none none!]
== [none none!]
>> type? tail y
== block!
>> type? pick tail y 1
== none!
>> type? pick y 1
== word!
>> type? pick y 2
== word!
attempt is not a shortcut to check for possible errors; try with error? is. attempt is a means to hide errors where you're not interested in failure reasons.

Here, none and none! are just words you're picking. Picking after tail returns a value of type none!:

y: reduce [none none!]
type? pick y 1 ;== none!
type? pick y 2 ;== datatype!
type? pick y 3 ;== none!
If a none of type word! were returned, you wouldn't be able to test a column result for none?.

With y: [none none!], if you keep it working as Red does, where both are word!, and past-tail pick is of none! type, you have a perfect 1:1 match with the way Red handles block content and past-tail picking.

Just test empty? rows: copy st. If you're still picking, you can't blame pick.

I don't blame pick. I am just telling that under Red logic it returns a none! of type none! at tail, which differs from a none inside a block, which is of type word! if you pick it. So you can distinguish end of rows from column content, just using type?!

none? none ;== true is the way ;)

>> y: [none none!]
== [none none!]
>> none? pick y 1
== false ;<<------
Or empty? rows: next st.

>> y: [none none!]
== [none none!]
>> if all [word? c: pick y 1 object? context? c none? get c] [print "Yes, I am NONE! type!"]
Yes, I am NONE! type!
>>
>> all [(first [none]) = (c: pick y 1) none? get c]
== true
>>
Or just 'none = pick y 1.

'none = pick y 1 — but if you know this, why do you keep writing attempt? Anyway, now I get Runtime Error 1: access violation with:

insert data-insert-statement compose [
    (insert-part)           ; insert part
    (prepared-values-part)  ; values
]
insert data-insert-statement compose [
    (insert-part) ; insert part with values in one query
]
About Runtime Error 1: access violation: it's possible that it's a GC issue, but I am not sure. It might also be a failed allocate (which essentially is a call to malloc). For now, I don't check for success on allocate, as is often to be seen in the Red runtime codebase. Can you split your data into smaller chunks and in between sometimes recycle some memory?

I added recycle between generating each statement and it seems to work. At least it doesn't crash every time, but I will continue testing....

I asked you not to probe more, yet you always come with code printing things.

probe gave me ВАЖНО: закрытие подключения из-за тайм-аута простоя сеанса (= IMPORTANT: closing the connection because of session idle timeout); after decoding I got "connection timeout, connection was closed".

Try:

statement: open connection: open odbc://my-dsn
change statement [timeout: 2:00] ;-- timeout after 2 hours
insert statement ["INSERT INTO ..." [...] [...] [...] [...] ... [...]]
I set idle_session_timeout = 120000. But I need to test more, it worked only with a tiny example.

Try unique and report about it.

UNIQUE in the context of a SQL query argument? Or unique in Red, and it crashed?

*** Runtime Error 1: access violation
*** Cannot determine source file/line info. ***
I get the access violation in random places. Sometimes it processes a few files before crashing, sometimes it crashes on the first.

Try probe compose [ (insert-part) (prepared-values-part) ].

print ["-------> " only-name]
resfile: to-red-file rejoin ["d:/code/samples/" only-name]
write/lines/append resfile reduce [insert-part lf prepared-values-part]
insert data-insert-statement compose [
(insert-part)
(prepared-values-part)
]
print "ccc111"

-------> fcsNotificationEA44_0173200002520000030_24857441.xml
ccc111
*** Runtime Error 1: access violation
*** Cannot determine source file/line info. ***
Comment out the insert data-insert-statement ... statement and instead probe the data to the screen until you see for sure that you've got the format right before throwing it at the database. The expected format is:
insert snowwhite [
"INSERT INTO Dwarves (Num, Name) VALUES (?, ?)"
[1 "Dopey"] [2 "Doc"] [3 "Bashful"] [4 "Sneezy"] [5 "Happy"] [6 "Grumpy"] [7 "Sleepy"]
]

insert data-insert-statement compose [
{INSERT INTO objects ("lotNumber", "purchaseNumber", "object_code", "object_name", "quantity", "isMedicalProduct", "quantity_undeffined", "price") VALUES (?,?,?,?,?,?,?,?)}
([[0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "100.00" none none 75.0] [0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "100.00" none none 75.0] [0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "100.00" none none 152.0] [0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "50.00" none none 122.67] [0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "2.00" none none 257.33] [0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "2.00" none none 236.67] [0 "0376200006316000009" "22.29.29.000" "Изделия пластмассовые прочие" "1000.00" none none 23.65] [0 "0376200006316000009" "23.19.23.110" "Посуда для лабораторных целей стеклянная" "10.00" none none 98.67] [0 "0376200006316000009" "23.19.26.000" {Изделия из стекла, не включенные в другие группировки} "1000.00" none none 4.63] [0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "2.00" none none 1213.33] [0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "1.00" none none 19349.0]])
]
print "insert done"

*** Script Error: ODBC error: []
*** Where: ctx||610~execute-statement
*** Near : param
*** Stack:

*** Runtime Error 1: access violation
*** Cannot determine source file/line info. ***
So again the access violation.

Please change

insert data-insert-statement compose [{INSERT ...} ([[0 "0376200006316000009" ...]])]

to

argument: compose [{INSERT ...} ([[0 "0376200006316000009" ...]])]
probe new-line/skip collect [foreach row next argument [foreach param row [keep type?/word param]]] on length? second argument
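For what it's worth, the splicing behavior of compose here can be seen in isolation (table t and the rows below are made up, not from the script under discussion):

```red
sql: "INSERT INTO t (a, b) VALUES (?, ?)"    ;-- hypothetical statement
rows: [[1 "x"] [2 "y"]]
probe compose [(sql) (rows)]
;-- the (rows) paren result is spliced into the outer block, yielding the
;-- shape the scheme expects: [sql-string [1 "x"] [2 "y"]]
```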
insert data-insert-statement argument

and post the probe output here.

word! params aren't supported, as per the documentation. Admittedly, the scheme could fail more gracefully in such a case, but that's polishing. The problem is with the params provided.

compose only composes the value argument of insert, not the inner param blocks.

insert data-insert-statement compose [
(insert-part)
(prepared-values-part)
]

prepared-values-part has to reduce each block it returns in the outer block.

append/only flat-data/(parent-table-name)/values reduce values-of words-in-block
prepared-values-part is reduced:

>> type? second reduce values-of #(a: 1 b: none)
== none!  ;-- so it should be reduced correctly
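A side note on the #(...) literal used in that test: values in a map literal are not evaluated, so b holds the word! none until reduced. A minimal check, assuming Red's literal-map semantics:

```red
m: #(a: 1 b: none)
probe type? second values-of m           ;-- word!: the literal word none
probe type? second reduce values-of m    ;-- none!: reduce evaluates it
```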
x: []
append x second reduce values-of #(a: 1 b: none)
probe x
print type? x/1
;== none
What is x/1 here? I can't figure it out.

prepared-values-part *does* reduce. Test with:

argument: compose [(insert-part) (prepared-values-part)]
probe new-line/skip collect [foreach row next argument [foreach param row [keep type?/word param]]] on length? second argument
append/only flat-data/(parent-table-name)/values reduce values-of words-in-block
xxx: values-of words-in-block
foreach x xxx [
probe x
print type? x
]

;== word

insert-part: {INSERT INTO objects ("lotNumber", "purchaseNumber", "object_code", "object_name", "quantity", "isMedicalProduct", "quantity_undeffined", "price") VALUES (?,?,?,?,?,?,?,?)}
prepared-values-part: [[0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "100.00" none none 75.0] [0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "100.00" none none 75.0] [0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "100.00" none none 152.0] [0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "50.00" none none 122.67] [0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "2.00" none none 257.33] [0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "2.00" none none 236.67] [0 "0376200006316000009" "22.29.29.000" "Изделия пластмассовые прочие" "1000.00" none none 23.65] [0 "0376200006316000009" "23.19.23.110" "Посуда для лабораторных целей стеклянная" "10.00" none none 98.67] [0 "0376200006316000009" "23.19.26.000" {Изделия из стекла, не включенные в другие группировки} "1000.00" none none 4.63] [0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "2.00" none none 1213.33] [0 "0376200006316000009" "20.59.52.190" {Реагенты сложные диагностические или лабораторные, не включенные в другие группировки} "1.00" none none 19349.0]]
argument: compose [(insert-part) (prepared-values-part)]
probe new-line/skip collect [foreach row next argument [foreach param row [keep type?/word param]]] on length? second argument

[
integer! string! string! string! string! word! word! float!
integer! string! string! string! string! word! word! float!
integer! string! string! string! string! word! word! float!
integer! string! string! string! string! word! word! float!
integer! string! string! string! string! word! word! float!
integer! string! string! string! string! word! word! float!
integer! string! string! string! string! word! word! float!
integer! string! string! string! string! word! word! float!
integer! string! string! string! string! word! word! float!
integer! string! string! string! string! word! word! float!
integer! string! string! string! string! word! word! float!
]

Test with your actual prepared-values-part, not a literal one. With a literal one, what else do you expect than literal values?

connection: open make port! [
scheme: 'odbc
target: "driver={PostgreSQL Unicode};server=127.0.0.1;port=5432;database=dbname;uid=postgres;pwd=123"
]

either not error? connection [...] [ ... ]

See help "error". 1 / 0 results in an error?

if error? try [1 / 0] [print "error"]
try works in another manner, it just tries to evaluate a block and returns the result or an error.

either not error? try [
connection: open make port! [
; ....
]
print "good connection"
] [
print "bad connection"
]

Why not either error? connection: try [open make port! [...]] [handle error] [do something with connection]?

2data: I have seen that in Rebol such words can be created with to word, but in Red it gives an error. It would be a nice addition to Red.

SKIPPED: 0
foreach [key value] flat-data [
sql: rejoin [{INSERT INTO } key " " block-to-quoted-round-brackets/double value/keys " VALUES " generate-values-placeholder length? value/keys ]
unique?: no
foreach el db-uniq-keys-constraints-rules/(parser-state/current-section) [
if ((to-lit-word el/1) = key) [
unique?: yes
append sql rejoin [{ ON CONFLICT } block-to-quoted-round-brackets/double el/2 (generate-do-update-part value/keys)]
] ]
params: either unique? [unique value/values] [value/values]
probe sql
data-insert-statement/state/debug?: yes
foreach row skip params SKIPPED [
types: collect [forall row [keep type? first row]]
probe row probe types
if error? result: try [insert data-insert-statement [sql row]] [
probe result
] ] ]

```Red
foreach [key value] flat-data [
sql: rejoin [{INSERT INTO } key " " block-to-quoted-round-brackets/double value/keys " VALUES " generate-values-placeholder length? value/keys ]
unique?: no
foreach el db-uniq-keys-constraints-rules/(parser-state/current-section) [
if ((to-lit-word el/1) = key) [
unique?: yes
append sql rejoin [{ ON CONFLICT } block-to-quoted-round-brackets/double el/2 (generate-do-update-part value/keys)]
] ]
probe sql
params: either unique? [unique value/values] [value/values]
types: collect [foreach row params [forall row [keep type? first row]]]
probe new-line/all params on
probe new-line/skip types on length? first params
data-insert-statement/state/debug?: yes
if error? result: try [insert data-insert-statement compose [(sql) (params)]] [
probe result
] ]
```

Use a --debug enabled console and redirect the full output of your script with Windows cmd stdout and stderr redirection, as in .\console.exe --do path\to\your\script.red > path\to\your\script.log 2>&1. Please provide the log and your script as a private gist, these screenshots are really sub-optimal for debugging.

Try changing the float! values to string! values:

....
params: either unique? [unique value/values] [value/values]
+++ parse params [some [into [some [param: change float! (form first param) | skip]]]]
types: collect [foreach row params [forall row [keep type? first row]]]
If you add the line marked +++, it will form every float value it comes across, leaving all other values untouched.

It was the float! bug thing! Pushed a commit which should fix it already.

Do a change statement [debug?: yes] before fetching, then examine the raw binary transmitted strings, look out for byte codes of unprintable characters and such.

I saved the output of the query run in the DOS console on FILE-A and the output of the query run in the gui-console on FILE-B. I then tried difference FA FB in the GUI-Console and I received a stack overflow error. recycle/off!

mold producing garbage? I've only encountered 1 garbage byte in the last half a year.

With recycle/on I again have garbage, but if I do a second recycle/off I still have garbage on the first run of the query after recycle/off, yet no more garbage on the second run and subsequent ones.

I'm testing the table style against different data sources (ODBC, SQLite, CSV, simple JSON, native blocks). I will also press for a simple native datastore later this year. One goal of alt-testing is to narrow down where bugs occur. Another is to flush them out and find real-world limits as they relate to memory use, data conversion, GC, and table style. The style itself will likely not be a bottleneck because it's all virtual data as far as it's concerned.

I can't relate the new-lines in the result set to stack-related problems with difference. Are you aware that there *is no* difference between a: head new-line next [1 2] on and b: [1 2]?

DIFFERENCE hits the ceiling and gives a stack overflow.

In the gui-console, about 2% of those blocks are split with newlines, while in the DOS debug console this does not happen.

Using difference to compare the DOS console query and the GUI-Console query saved files, I have discovered I get a stack overflow, and this event fixes the GUI-Console, which does not split blocks anymore on every instance of the query.

There are newlines between values, but you can LOAD this block even if it spans 2 or 4 lines. The newlines are not inside values but, as I said, between them.

That happens only in the gui-console; with the same ROW in the DOS version there is no difference.
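The new-line flag claim above is easy to check in the console:

```red
a: head new-line next [1 2] on    ;-- set a new-line marker before the 2
b: [1 2]
probe a = b                       ;-- true: markers don't affect comparison
probe mold a                      ;-- but a molds across two lines
```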
Trying DOS-BLOCK = GUI-SPLITTED-BLOCK, Red returns TRUE.

I tried difference loading the two sets in REBOL2 and the result block is empty. So the only differences are the spurious newlines.

The scheme puts a new-line (not a newline) on every row block of a result set fetched in block mode, and adds a new-line on every value in the first column of a result set fetched in change statement [flat?: yes] mode *on purpose* for improved readability in the console (whether GUI or not), but adds no new-lines on values in block mode. That has been there since your earliest attempts and trials using the scheme, so I don't expect that you are talking about these?

It is the stack overflow which seems to fix the problem.

the-query: "SELECT top 10000 * from dotes"
st: open cn: switch 'b-ok con
insert st the-query
x: copy st
save %direct-query.txt x
In %direct-query.txt obtained running this code on the GUI console, you get the result with LF in the middle. If you run this code on the debug console, you get a perfect save!

Note: you must trigger the stack overflow before running this code in the GUI Console.

How do I handle an error from a try block?

insert data-insert-statement compose [
    (insert-part)
    (prepared-values-part)
]
*** Script Error: ODBC error: ["23505" 1 {ERROR: duplicate key value violates unique constraint}]

I forgot to set some DB restriction, but I want to handle this error and print the request which caused it.

if error? try [
insert data-insert-statement compose [
(insert-part)
(prepared-values-part)
]
] [
print insert-part ; and some error handling
]

if error? set/any 'res try cmd: [1 / 0] [print [mold cmd 'caused mold res]]

With insert always returning a value, there's no need for set/any:

if error? result: try [insert statement sql] [probe result]
Picking from a statement port by column word, column name string or column index number is possible, but is restricted to retrieving blobs/texts from the first row only in the current rowset. If your statement/state/window size is greater than 1 and the rowset is positioned on row *n*, with this you're not able to pick from row *n+1* and further. And setting the window size to 1 of course isn't efficient either.

cursor: open statement: open connection: open odbc://dsn
change statement [window: 1'000]
insert statement "select invoiceNr, invoicePdf from invoices"
rows: next statement               ;-- fetch the first 1'000 invoices
at cursor 333
pdf333: pick cursor 'invoice-pdf   ;-- but fetch the pdf of row 333
at cursor 444
pdf444: pick cursor 'invoice-pdf   ;-- and row 444 only
It's only the open level to grasp ;) Connections, statements, cursors ... all are ports. Have fun!

Writing works the same way: poke cursor 'invoice-pdf load/binary %invoice000333.pdf.

Use a console compiled in --debug mode to see what's going on, additionally or alternatively a statement with statement/state/debug?: on to inspect the binary buffers transferred, plus the usual discipline and systematic approach which Toomas undoubtedly has in abundance ;)

I doubt a separate poke statement is needed (at least as long as we aren't talking *really large* blobs and texts). Just now I could easily insert BLOBs/TEXTs of moderate sizes (the source of https://www.red-lang.org and a 150Kb PDF) into a PostgreSQL table in columns of datatype text and bytea without further ado ...

port: open csv:///data.csv
port: skip port 200          ;-- go to line 200 in CSV data
data: copy/part port 10      ;-- return 10 records (lines) from CSV
The file is scanned in the open action, so there is some overhead.

For %WDIData.csv from your example dataset (around 200MB), scanning takes around 1:20 with a buffer size of about 0.5-1MB (that's the optimal size I found). Once the scanning is done, random access is instant.

Lazy opening would be much faster. As I wrote, it's just a PoC now.

copy/part returns a number of lines and each of these has to be separately csv-loaded. Would it be better/possible to return a single string of n lines to be loaded once? Or maybe I am missing something.

Writing (insert/append/write/...) is going to be a separate challenge :)

>> dt [change wdi: open conn: open make port! [scheme: 'odbc target: rejoin [
[ "Driver={Microsoft Text Driver (*.txt; *.csv)};"
[ {Dbq=C:\Users\Christian Ensel\Development\table\data\WDI\;}
[ "Extensions=asc,csv,tab,txt;"
[    ]] [access: 'dynamic]] ;== 0:00:00.0075212
>> dt [insert/part wdi "select * from WDIData.csv" 50] ;== 0:00:00.0468715
>> dt [head wdi] ;-- first 50 rows ;== 0:00:00
>> dt [tail wdi] ;-- last 50 rows ;== 0:00:02.96939
>> dt [close conn] ;== 0:00:00.0625023
>> dt [copy wdi] ;== 0:01:22.7897
dt [change wdi: open conn: open odbc://text [access: 'dynamic]]  ;== 0:00:00.0109964
dt [insert/part wdi "select * from WDIData.csv" 50]              ;== 0:00:00.038935
dt [head wdi]                                                    ;== 0:00:00.0129694
dt [tail wdi]                                                    ;== 0:00:04.114
dt [close conn]                                                  ;== 0:00:00.0496009
See tests\source\environment\odbc-test.red (even if a bit contrived at times to suit the test suite).

User Error: "Data are not aligned"

This happens with a CSV dataset saved from SQL Server into Red. Field counts per line:

110 110 110 112 112 110 110 110 112 110
serve123r\server123;mycompany\ammin;In Volus (i.e. 3 elements) gets duplicated in longer lines instead of a single integer in shorter lines. So, 2 excess fields. Is the duplication intentional?
>> collect [foreach row rows: [[1 "a;b"] [2 "c;d"]] [keep to-csv/with/quote row #";" #"^""]]
== [{1;"a;b"^/} {2;"c;d"^/}]

With rows: [[1 "a;b"] [2 "c;d"]] fetched thru ODBC, a single to-csv/with/quote rows #";" #"^"" is enough. The result is one big string.

*** access violation when wildly clicking thru different tables, is that the symptom you've discovered?

Compile %console.red with a Needs: [View JSON CSV ODBC] header including the View component and with the --debug switch: do/args %red.r "-r --debug %environment/console/CLI/console.red".

load/as ... 'csv creates a block where each line is a block with a big string inside. Is there a need to change the delimiters on load?

See help "csv" and help load-csv. Chances are that in order to successfully parse the CSV the same delimiters have to be used as the ones used in the file.

Put a schema.ini file in the folder where the csv is placed. More info here: https://docs.microsoft.com/en-us/sql/odbc/microsoft/schema-ini-file-text-file-driver?view=sql-server-ver16

It should fire an error, but firing that error crashes ... :thinking: :dizzy: :zany_face:

vvv EXECUTE-STATEMENT FAILED vvv
*** Runtime Error 1: access violation
*** in file: /C/Users/Christian Ensel/Development/red/runtime/datatypes/common.reds
*** at line: 100
***
*** stack: red/copy-cell 04C92FD8h 01FBA548h
*** stack: red/error/create 01D2D1A0h 01D2EA08h 04C92FD8h 058E5620h 00000000h
*** stack: red/fire 0 010AE560h
*** stack: ctx||544~execute-statement 0336EA84h
*** stack: red/interpreter/exec-routine 0336EA74h
With schema.ini I can bypass this problem.

*** stack: red/fire 0 010AE560h
Why red/fire 0? The first parameter of red/fire should be at least 2.

With a geometry column:
The geometry datatype isn't allowed in a SELECT statement's result columns list — because of that datatype not being supported by ODBC at all from what I see, e.g. https://stackoverflow.com/questions/52058228/odbc-prepared-statement-for-table-with-a-geometry-column-type-throws-exception#comment91067758_52058228. It's a good thing, at least to me, that it throws a (not so particular) error instead of crashing; the hopes however that unknown columns will just be returned as a string (I do that/try that) have not been fulfilled in this case.

Any progress on the *** access violation errors?

On the access violation errors: fire is a block, and I think the call itself is legit: https://github.com/gurzgri/red/blob/5e5bc196cebc235b7434a4f619b0c9fc26d5c916/environment/schemes/odbc.red#L1622 with odbc/odbc: word/load "ODBC" and odbc/common-field-errors being an enum defined in https://github.com/gurzgri/red/blob/5e5bc196cebc235b7434a4f619b0c9fc26d5c916/environment/schemes/odbc.reds#L2701

Compile with a Needs: [View JSON CSV ODBC] header including the View component, then rerun your retrieval code and see if saving results introduces the LF problem. If so, and if it reliably doesn't happen when View is not Needs:ed, that I think would point to a problem with/in the View engine itself and not so much the GUI console.

Geometry columns return binary! values like #{00000000010100000000000000000024400000000000003440}; otherwise use select ST_AsText(g) as geo from gis_geometry or select cast(ST_AsText(g) as varchar(256)) from gis_geometry.

The deferred columns thing is a bit of a rushed design I'm not too happy with, but was a low hanging fruit to provide a way for Giuseppe to retrieve e.g. the PDF invoices from BLOB columns of his corporate database.

Have you seen the system/schemes/odbc/state/sources function already? It lists all datasources configured in the (32bit) odbc panel. That way you wouldn't have to hardcode them. And then there's insert statement [tables], listing all tables.
columns: insert change stmt: open conn: open odbc://dsn [flat?: yes] 'tables
probe tables: copy stmt
probe columns
close conn

lists all tables in a datasource-agnostic way and provides columns [table-cat table-schem table-name table-type remarks].

>> databases: unique extract/index tables length? columns 1
== ["information_schema" "mysql" "performance_schema" "sample" "sys"]
db: "sample"
insert stmt compose [tables (db)]
copy stmt
You have to compose it. Not a problem, it can be explicitly composed, just a notice.

Avoid strict mode for now. It's *very* picky if not buggy, and it sometimes requires an "" empty string instead of none:

>> insert maria [tables none none "gis_%line"]         ;== finds me two tables
>> insert maria [tables none "" "gis_%line"]           ;== finds no tables
>> insert maria [strict tables none "" "gis_%line"]    ;== finds no tables (which is correct), but
>> insert maria [strict tables none "" "gis_line"]     ;== finds no tables (in my case I somehow think it should), but
>> insert maria [strict tables none none "gis_%line"]  ;== throws error HY009 "Invalid use of null pointer", which is far from obvious
How does mariadb behave without explicitly pointing out which database I want to see? The radio box at right changes the view from data to columns, but again mariadb crashes on this transfer (occasionally sqlite too). Others seem to be stable.

I can't find the -odbc branch; does it mean that it now has another name? Or am I looking at the wrong place?

The odbc branch has been merged into master of [gurzgri/red](https://github.com/gurzgri/red).

MariaDB reports a ColumnSize of 536'870'911 WCHARs, i.e. 1 GByte, for column "TYPE_NAME", leading to a system/cpu/overflow? when trying to preallocate a column buffer sized 1GByte × window size (1024 rows). That allocation returned 0, with a *** access violation being the inevitable result when [SQLFetchScroll](https://docs.microsoft.com/en-us/sql/odbc/reference/syntax/sqlfetchscroll-function) was trying to write into that buffer.

This breaks the 'columns catalog function with MariaDB, because with that check in place it will always cause-error 'internal 'limit-hit. Which is better than crashing, but ... sigh.

You can switch databases with change connection [catalog: "database"]. The Microsoft Text Driver doesn't like doing so if a statement has been opened already. Other datasources may be even pickier, not allowing doing so *after* the connection itself has been established. And there is currently no way to do so *before* the connection connects with the scheme.

See the try-odbc-only.red [gist](https://gist.github.com/gurzgri/a75c23b5e32fda1d3a5d7f62c23ecb46), with said MariaDB limitation (MySQL, SQLite not tested).

send-startup-info
[SUCCESS] PostgreSQL Connection
odbc-count-request
total count(*): 3207408
....
Generation INSERT STATEMENT
DELETE: DELETE FROM notifications WHERE "purchaseNumber" = '0101300011118000042';
*** Script Error: ODBC error: []
*** Where: execute-statement
*** Near : port/state describe-result port/state
*** Stack: view layout layout layout do-safe parser-init start-stop-processing error? xml-for-processing-request time-it try-to-insert-data-to-db data-to-sql flat-data-to-prepared-insert
What is port/state describe-result port/state?

delete-sql: none
either ((type? idata/2) = string!) [
delete-sql: rejoin [{DELETE FROM } key { WHERE "} to-string idata/1 {" = '} idata/2 {';}]
] [
delete-sql: rejoin [{DELETE FROM } key { WHERE "} to-string idata/1 {" = } idata/2 {;}]
]
print "DELETE: "
print delete-sql
insert data-insert-statement delete-sql
print "delete done"

DELETE: DELETE FROM notifications WHERE "purchaseNumber" = '0126300035817000408';
*** Script Error: ODBC error: []
*** Where: execute-statement
*** Near : port/state describe-result port/state
*** Stack: view layout layout layout do-safe parser-init start-stop-processing error? xml-for-processing-request time-it try-to-insert-data-to-db data-to-sql flat-data-to-prepared-insert
INSERT statements work on both servers without error. The problem is only with DELETE.

delete-sql: rejoin [{DELETE FROM } key { WHERE "} to-string idata/1 {" = ?}]
print "delete start"
insert data-insert-statement compose [ (delete-sql) (idata/2) ]
print "DELETE: "

delete start
*** Script Error: ODBC error: []
*** Where: execute-statement
*** Near : port/state describe-result port/state
*** Stack: view layout layout layout do-safe parser-init start-stop-processing error? xml-for-processing-request time-it try-to-insert-data-to-db data-to-sql flat-data-to-prepared-insert
delete start
INSERT actor
OPEN? actor
FREE-PARAMETERS [
]
FREE-COLUMNS [
]
FREE-STATEMENT [
SQLFreeStmt 0
SQLFreeStmt 0
SQLFreeStmt 0
]
PREPARE-STATEMENT [
SQLPrepare 0
]
BIND-PARAMETERS [
1 rows of 1 params
allocate status/value, 4 bytes @ 0509B4F0
SET-STATEMENT [
SQLSetStmtAttr 0
]
SET-STATEMENT [
SQLSetStmtAttr 0
]
prm 1
slotlen? row 1/1
TYPE_OF(7)
slotlen = 40
slotlen = 40
allocate buffer, 40 bytes @ 0509A168
allocate lenbuf, 4 bytes @ 0509B520
populate row 1/1
TYPE_OF(7)
prm 1
C-type 65528
SQL-type 65527
col-size 40
digits 0
buffer 0509A168
slotlen 40
lenbuf 0509B520
SQLBindParameter 0
]
EXECUTE-STATEMENT [
SQLExecute 100
*** Script Error: ODBC error: []
*** Where: execute-statement
*** Near : port/state describe-result port/state
*** Stack: view do-events do-actor do-safe start-stop-processing error? xml-for-processing-request time-it try-to-insert-data-to-db data-to-sql flat-data-to-prepared-insert

if error? result: try [
insert data-insert-statement compose [ (delete-sql) (idata/2) ]
] [
probe result
]

make error! [
code: 350
type: 'script
id: 'bad-bad
arg1: 'ODBC
arg2: []
arg3: none
near: [port/state
describe-result port/state]
where: 'execute-statement
stack: 16062188
]

SQLExecute 100 means executing the statement returned SQL_NO_DATA. I stumbled upon that myself recently and had fixed it already, but hadn't yet pushed the fix. Made up for that just now, please pull again.

This works not only for SELECT but for INSERT, UPDATE and DELETE statements, too. If your table template allows for some sort of "on-cell-change" handlers, writing back edits to the database should be quite possible.

$ git reset --hard 46f1ea366f03204f78e79b9113794017965a1334
HEAD is now at 46f1ea366 FIX: additional fix for issue #5134
do/args %red.r "-r %environment/console/CLI/console.red"
Compiling d:\code\app.red ...
...using libRedRT built on 21-Jun-2022/14:16:13+3:00
*** Compilation Error: module not found: ODBC
*** in file: d:\code\app.red
*** near: []
Runtime Error 1: access violation
Red [
Needs: [View ODBC]
]
connection: try [
open make port! [
scheme: 'odbc
target: "driver={PostgreSQL Unicode};server=127.0.0.1;port=5432;database=db;uid=postgres;pwd=1234567"
]
]
either not error? connection [
xml-files-statement: open connection
data-insert-statement: open connection
print "[SUCCESS] PostgreSQL Connection"
] [
print "[ERROR] PostgreSQL Connection"
print connection/arg2 ; error text
]
data-insert-statement: open connection
insert-part: {INSERT INTO applications ("lotNumber", "purchaseNumber", "admitted", "appNumber", "appRating", "increaseInitialPrice", "offer_price", "firstOffer_price", "firstOffer_increaseInitialPrice", "lastOffer_price", "lastOffer_increaseInitialPrice", "bestPrice", "finalPrice", "journalNumber", "lastOfferPrice", "participant_contactEMail", "participant_address_post", "participant_address_fact", "participant_phoneNumber", "participant_fullname", "participant_inn", "is_individualPerson", "individualPerson_address", "individualPerson_phone", "individualPerson_contactEMail", "individualPerson_inn", "winnerPrice", "appRejectedReason") VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
prepared-values-part: [
[0 "0176200005516000136" true none none none none none none none none none none "1" none none none none none none none none none none none none none {Some text}]
[0 "0176200005516000136" true none none none none none none none none none none "3" none none none none none none none none none none none none none none]
[0 "0176200005516000136" true none none none none none none none none none none "4" none none none none none none none none none none none none none none]
[0 "0176200005516000136" none none none none none none none none none none none "5" none none none none none none none none none none none none none none]
]
print "start"
insert data-insert-statement compose [
(insert-part)
(prepared-values-part)
]
print "end"

[SUCCESS] PostgreSQL Connection
start
*** Script Error: values must be of the same type
*** Where: do
*** Near : args
*** Stack: cause-error
*** Runtime Error 1: access violation
*** at: 54B6282Eh
Sometimes it gives the access violation error, but this example just shows the error above.

The *** Script Error: values must be of the same type error is intentionally thrown when the parameters in a given parameter column are of different types (not counting none, which *is* allowed). Of course that error should *not* result in a *** Runtime Error 1: access violation — is that really what's happening (script error + access violation)? I'll have to investigate whether this is a possible situation.

The cause of *** Script Error: values must be of the same type here is probably your prepared-values-part: could it be that true and none are again of type word! instead of type logic! and none!? If so, the "appRejectedReason" parameter would consist of string! and word! and the parameters would be rejected.

INSERT DONE!!!
---------------------
INSERT INTO applications ("lotNumber", "purchaseNumber", "admitted", "appNumber", "appRating", "increaseInitialPrice", "offer_price", "firstOffer_price", "firstOffer_increaseInitialPrice", "lastOffer_price", "lastOffer_increaseInitialPrice", "bestPrice", "finalPrice", "journalNumber", "lastOfferPrice", "participant_contactEMail", "participant_address_post", "participant_address_fact", "participant_phoneNumber", "participant_fullname", "participant_inn", "is_individualPerson", "individualPerson_address", "individualPerson_phone", "individualPerson_contactEMail", "individualPerson_inn", "winnerPrice", "appRejectedReason") VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
[[0 "0176200005516000137" true none none none none none none none none none none "1" none none none none none none none none none none none none none "sdfddd dddddddd"] [0 "0176200005516000137" none none none none none none none none none none none "2" none none none none none none none none none none none none none none]]
------------------------------------------------------
*** Runtime Error 1: access violation
*** at: 54B6282Eh

INSERT INTO applications ("lotNumber", "purchaseNumber", "admitted", "appNumber", "appRating", "increaseInitialPrice", "offer_price", "firstOffer_price", "firstOffer_increaseInitialPrice", "lastOffer_price", "lastOffer_increaseInitialPrice", "bestPrice", "finalPrice", "journalNumber", "lastOfferPrice", "participant_contactEMail", "participant_address_post", "participant_address_fact", "participant_phoneNumber", "participant_fullname", "participant_inn", "is_individualPerson", "individualPerson_address", "individualPerson_phone", "individualPerson_contactEMail", "individualPerson_inn", "winnerPrice", "appRejectedReason") VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
[[0 "0176200005516000137" true none none none none none none none none none none "1" none none none none none none none none none none none none none {Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}] [0 "0176200005516000137" none none none none none none none none none none none "2" none none none none none none none none none none none none none none]]

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Runtime Error 1: access violation, and 1 success.

Put foreach row prepared-values-part [probe collect [foreach prm row [keep type? prm]]] there and post the output here.

Here, on a screenshot?

INSERT INTO applications ("lotNumber", "purchaseNumber", "admitted", "appNumber", "appRating", "increaseInitialPrice", "offer_price", "firstOffer_price", "firstOffer_increaseInitialPrice", "lastOffer_price", "lastOffer_increaseInitialPrice", "bestPrice", "finalPrice", "journalNumber", "lastOfferPrice", "participant_contactEMail", "participant_address_post", "participant_address_fact", "participant_phoneNumber", "participant_fullname", "participant_inn", "is_individualPerson", "individualPerson_address", "individualPerson_phone", "individualPerson_contactEMail", "individualPerson_inn", "winnerPrice", "appRejectedReason") VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
[[0 "0176200005516000137" true none none none none none none none none none none "1" none none none none none none none none none none none none none {Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}] [0 "0176200005516000137" none none none none none none none none none none none "2" none none none none none none none none none none none none none none]]
------------------------------------------------------
[integer! string! logic! none! none! none! none! none! none! none! none! none! none! string! none! none! none! none! none! none! none! none! none! none! none! none! none! string!]
[integer! string! none! none! none! none! none! none! none! none! none! none! none! string! none! none! none! none! none! none! none! none! none! none! none! none! none! none!]
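For comparison, the same per-row type probe can be sketched outside Red (a Python sketch with made-up rows; the helper name and data are illustrative only):

```python
# Hypothetical mirror of the Red probe above: collect each row's type
# signature so rows whose parameter types differ stand out at a glance.
def type_signature(row):
    return [type(v).__name__ for v in row]

rows = [
    [0, "0176200005516000137", True, None, "1", "some long text"],
    [0, "0176200005516000137", None, None, "2", None],
]
for row in rows:
    print(type_signature(row))
```

Comparing the printed signatures quickly shows which positions flip between a string and a none between rows.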
Here, set date-insert-statement/state/debug?: on in this particular line, and let's see what the param buffer looks like.

recycle/off
... INSERT DONE!!!
INSERT done D:/zak_data/fz44/protocols/Adygeja_Resp/protocol_Adygeja_Resp_2016030100_2016040100_001/fcsProtocolEF1_0176200005516000137_7929152.xml
*** Script Error: ODBC error: []
*** Where: free-statement
*** Near : port/state
*** Stack: view layout layout layout do-safe parser-init start-stop-processing error? xml-for-processing-request time-it try-to-insert-data-to-db
applications was inserted.

Access violation again:

04F94628: 4c00                                    L.
04F948B8: 6900 6100 ffff ffff                     i.a.ÿÿÿÿ
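Those byte pairs read like little-endian UTF-16 code units (4c00 is "L", 6900 6100 is "ia", matching the ASCII column of the dump). A quick Python check of the hand-copied bytes:

```python
# Bytes hand-copied from the dump above; UTF-16LE pairs decode to text.
print(bytes.fromhex("4c00").decode("utf-16-le"))      # L
print(bytes.fromhex("69006100").decode("utf-16-le"))  # ia
```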
(length? {Lorem ipsum ... est laborum.}) + 1 << 1 ;== 892 bytes big

And I might already have an idea why. It's probably an artefact of the none param in row 2. I'll have a look.

I replaced the allocate/free calls with direct calls to HeapAlloc/HeapFree, and ... SQLite results suddenly could be retrieved. Of course I didn't think of the underlying problem as solved; it was more like postponed. But the problems went away, and I continued with cleaning up and refactoring the code. Fine.

Using the GetProcessHeap default heap didn't help either. So I finally wrote my own heap walker function, which allowed me to keep an eye on every allocation I made, and surrounded all API calls with functions inspecting the heap's validity before and after calling them.

The heap was intact before SQLDescribeCol and wasn't intact after. I had told SQLDescribeCol that the buffer to return a column name in was len wide chars big, whilst it really was only len bytes big (half the size). The buffer was still large enough to hold all the column names I had encountered so far, but the driver wrote past its end anyway. Back to allocate/free again ...

So s.importodare is translated to importo-dare in Red ODBC? (Note, s is an alias.) I see why My Column gets translated to my-column, but why MyColumn too?
{
select
TABLE_name as TableName,
Column_Name as ColumnName,
Data_Type as Datatype,
Character_Maximum_Length as CharacterMaximumLength
from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME = $$TABLE-NAME$$
}

The statement/state/columns block keeps an internal copy of such information, with the layout as defined from https://github.com/gurzgri/red/blob/3289753aef4a15f7bdc9e3e9092ca40bd9ab15c5/environment/schemes/odbc.reds#L2738 on. Or consider having a look into the docs; there are dedicated means for such things.

The error isn't on the insert there, it's on the copy. Which is expected and fully legit: first you manipulate data with an SQL INSERT statement and throw away / ignore the result insert returns for that (the number of rows inserted). Then you immediately try to fetch rows with copy, without having queried for any with an SQL SELECT statement first. Hence: invalid cursor state.

Isn't INSERT INTO xeventslog (type) VALUES ("Record") just [invalid SQL](https://stackoverflow.com/questions/1992314/what-is-the-difference-between-single-and-double-quotes-in-sql)?

*** Script Error: ODBC error.

the-query: {insert into xeventslog (type) values ("Record")}
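To recap the SQLDescribeCol sizing bug from the heap hunt above in plainer arithmetic (a Python sketch; the function names are mine, not Red's):

```python
def utf16le_bytes_needed(s: str) -> int:
    # len wide chars plus the terminating NUL, two bytes each;
    # this is Red's (length? s) + 1 << 1
    return (len(s) + 1) * 2

def buggy_allocation(s: str) -> int:
    # the bug: allocating only len + 1 BYTES while telling the driver
    # the buffer holds len wide CHARS -- half the needed size
    return len(s) + 1

name = "importo-dare"
print(utf16le_bytes_needed(name))  # 26
print(buggy_allocation(name))      # 13
```

The driver believes it may write twice as many bytes as were actually allocated, so every sufficiently long column name corrupts whatever follows the buffer on the heap.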
st: open cn: switch 'rd con
insert st the-query

the-query: {insert into xeventslog (type) values ($$Record$$)}

Is $$-quoting a thing with SQL Server, or is it something you've invented? I never came across that so far.

I use $$ as delimiters for placeholders. The word in the middle of the placeholder is replaced with the corresponding Red word. So I have discovered that if you make an error in the column name, then everything crashes.

That's what apply-values is for, so you can:

apply-values the-string make object! [record: "Record type 22"]
Each occurrence of record in the string, together with its delimiters, is replaced with the content of the word record in the context. In this situation the string would be converted to:

the-query: {insert into xeventslog (type) values ('Record type 22')}

>> insert chinook {insert into Genre (GenreId, Name) values (99, $$Record$$)}
*** Script Error: ODBC error: ["HY000" 1 "parameter marker count incorrect (1)"]

With "Record" it inserted as if I'd given it as 'Record'.

>> insert depot {insert into publishers (publisher_id, name) values ('bad0bad0bad0', "Record")}
*** Script Error: ODBC error: ["42703" 1 {ERROR: column "Record" does not exist;^/Error while preparing parame

With $$Record$$ it inserted as if I'd given it as 'Record'.

It's unique that I heavily use in my project.

You can change statement [bookmarks?: on]. It then retrieves the rows with an additional first column being a bookmark column, by which the db server can track/identify rows without them being part of the table themselves.

do/args %red.r "-r %environment/console/CLI/console.red"
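The $$ placeholder substitution described above could look roughly like this (a Python sketch under my own assumptions about the quoting rules; the real apply-values is Red code and may well differ, e.g. Red words can contain hyphens, which \w does not match):

```python
import re

def apply_values(query: str, ctx: dict) -> str:
    # Replace each $$word$$ with the corresponding context value,
    # single-quoted, with embedded single quotes doubled per SQL rules.
    def sub(m):
        value = str(ctx[m.group(1).lower()])
        return "'" + value.replace("'", "''") + "'"
    return re.sub(r"\$\$(\w+)\$\$", sub, query)

q = apply_values("insert into xeventslog (type) values ($$Record$$)",
                 {"record": "Record type 22"})
print(q)  # insert into xeventslog (type) values ('Record type 22')
```

Doing textual substitution like this still inherits all the usual quoting pitfalls; real parameter markers (?) let the driver handle typing and escaping instead.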
>> do/args %red.r "-c -e %/d/code/zakupki/XMLParser/XMLParser.red"
Script: "Red command-line front-end" (none)
Script: "Encap virtual filesystem" (21-Sep-2009)
Script: "Get git version data" (none)
-=== Red Compiler 0.6.4 ===-
Compiling d:\code\zakupki\XMLParser\XMLParser.red ...
Compiling libRedRT...
...compilation time : 1429 ms
Compiling to native code...
Script: "Red/System PE/COFF format emitter" (none)
...compilation time : 47453 ms
...global words : 22935 (69.72%)
...linking time : 545 ms
...output file size : 1769472 bytes
...output file : C:\odbc-new\libRedRT.dll
...compilation time : 1248 ms
Target: MSDOS
Compiling to native code...
*** Compilation Error: undefined symbol: red/string/load-in
*** in file: %/C/odbc-new/environment/schemes/odbc.reds
*** in function: exec/odbc/diagnose-error
*** at line: 2969
*** near: [
string/load-in as c-string! state 5 errors UTF-16LE
integer/make-in
do/args %red.r "-r %environment/console/CLI/console.red"
do/args %red.r "-c -e %/d/code/zakupki/XMLParser/XMLParser.red"
>> connection: open make port! [
[ scheme: 'odbc
[ target: "driver={PostgreSQL Unicode};server=127.0.0.1;port=5432;database=fz44;uid=postgres;pwd=12345"
[ ]
*** Script Error: ODBC error: ["IM002" 0 {[Microsoft][ ODBC] Data source name not found and no default driver specified
*** Where: either
*** Near : {driver={PostgreSQL Unicode};server=127.0}
*** Stack:

The driver's name as listed is PostgreSQL ODBC Driver(UNICODE), so:
connection: open make port! [
scheme: 'odbc
target: "driver={PostgreSQL ODBC Driver(UNICODE)};server=127.0.0.1;port=5432;database=fz44;uid=postgres;pwd=12345"
]

Strange that system/schemes/odbc/state/drivers returns a driver name other than the one listed in the admin panel and the one required to connect. And impossible to tell why, too, because it looks as if by now the original output (which was posted here only incomplete anyway) has been deleted.

[[123]]

Have you tried a ctrl+f search in the docs?

flat row

A SELECT query works just as well as any other. Only for INSERT, UPDATE and DELETE queries is the number of affected rows returned as an integer.
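That split, a rowset for SELECT versus an affected-row count for INSERT/UPDATE/DELETE, is how most database APIs behave. For comparison, a small sqlite3 sketch (illustrative only, not Red):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE xeventslog (type TEXT)")

# Parameter markers sidestep the single- vs double-quote pitfalls entirely.
cur = con.execute("INSERT INTO xeventslog (type) VALUES (?)", ("Record",))
print(cur.rowcount)  # 1 -- an affected-row count, not a result set

rows = con.execute("SELECT type FROM xeventslog").fetchall()
print(rows)  # [('Record',)] -- a rowset
```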