Mrc-converter-suite: converting CSV to SECURE NOTE mixes the order of ORIGINAL FIELDS [Solved]
I want to convert data from EXCEL CSV to 1Password using mrc-converter-suite (version 2023-09-23-1000).
My data set has hundreds of records; it has been reduced and modified to 5 items for this question.
Structure: TITLE + 5 additional fields.
I want to import it to 1Password as “secure notes”.
It works pretty well with the Title (which becomes the "Title" of the 1Password Secure Notes item) and the 5 additional columns (which end up as "Original Fields" in the 1Password Secure Notes item).
TestData.CSV file:
Title,id (hex),DataVersion,DosXExtender,CardType,KunY (dez)
A_Company,4519,2.0,2 / 256,PRIME,5203
D_Company,350E,2.0,2 / 256,PRIME,3651
S_Company_New,450E,2.1,4 / 1024,PRIME,3651
B_Company,9999,2.0,2 / 256,PRIME,10
Z_Company,2BF1,2.0,2 / 256,PRIME,4048
perl convert.pl csv ..\TestData.csv -v --imptypes note -a --ignoretimestamps -d
In the debug output it is already visible that the order of the "additional fields" is changed/mixed:
main : Running convert.pl from 'C:/Users/skoch/Desktop/mrc-converter-suite'
main : convert.pl arguments: csv ..\TestData.csv -v --imptypes note -a --ignoretimestamps -d
main : Loaded generator: MCS::Generators::PUX
print_fileinfo : Input file info: "..\TestData.csv"
print_fileinfo : size: 262
import : ROW: 1
normalize_records : field: id (hex)
normalize_records : added custom field: note, _note_custom_1, id (hex), 4519
normalize_records : field: cardtype
normalize_records : added custom field: note, _note_custom_2, cardtype, PRIME
normalize_records : field: dosxextender
normalize_records : added custom field: note, _note_custom_3, dosxextender, 2 / 256
normalize_records : field: dataversion
normalize_records : added custom field: note, _note_custom_4, dataversion, 2.0
normalize_records : field: kuny (dez)
normalize_records : added custom field: note, _note_custom_5, kuny (dez), 5203
print_record : title: A_Company
tags:
key(_note_custom_5): kuny (dez) = 5203
key(_note_custom_4): dataversion = 2.0
key(_note_custom_3): dosxextender = 2 / 256
key(_note_custom_2): cardtype = PRIME
key(_note_custom_1): id (hex) = 4519
notes:
import : ROW: 2
normalize_records : field: dosxextender
normalize_records : added custom field: note, _note_custom_6, dosxextender, 2 / 256
normalize_records : field: cardtype
normalize_records : added custom field: note, _note_custom_7, cardtype, PRIME
normalize_records : field: id (hex)
normalize_records : added custom field: note, _note_custom_8, id (hex), 350E
normalize_records : field: kuny (dez)
normalize_records : added custom field: note, _note_custom_9, kuny (dez), 3651
normalize_records : field: dataversion
normalize_records : added custom field: note, _note_custom_10, dataversion, 2.0
print_record : title: D_Company
tags:
key(_note_custom_10): dataversion = 2.0
key(_note_custom_9): kuny (dez) = 3651
key(_note_custom_8): id (hex) = 350E
key(_note_custom_7): cardtype = PRIME
key(_note_custom_6): dosxextender = 2 / 256
What is not so nice for me: the ORIGINAL FIELDS order is changed: id (hex), cardtype, dosxextender, dataversion, kuny (dez) are not always in the same order within one SECURE NOTE item. Selecting the next item in the vault shows yet another order of fields.
What I’m interested in:
IS THERE A WAY TO CONTROL THE CONVERSION PROCESS SO THAT THE ORIGINAL FIELDS ALWAYS APPEAR IN THE SAME ORDER (ACROSS ALL ITEMS)?
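As background: the per-record scrambling looks like Perl's randomized hash iteration at work. Hash key order in Perl is undefined and, since Perl 5.18, actively randomized per hash, so two records with identical field names can come out in different orders even within a single run. A minimal stand-alone illustration (this is not the converter's actual code):

use strict;
use warnings;

# Two hashes with identical keys may iterate in different orders (Perl 5.18+).
my %row1 = ('id (hex)' => '4519', 'cardtype' => 'PRIME', 'dataversion' => '2.0');
my %row2 = ('id (hex)' => '350E', 'cardtype' => 'PRIME', 'dataversion' => '2.0');

print "row1:   ", join(', ', keys %row1), "\n";
print "row2:   ", join(', ', keys %row2), "\n";

# Sorting the keys (or remembering the CSV column order in an array)
# would give a stable, repeatable field order.
print "sorted: ", join(', ', sort keys %row1), "\n";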
DETAILS - WHAT I DID:
Original Excel: (screenshot)
Saved as TestData.csv (contents shown above), then ran:
perl convert.pl csv ..\TestData.csv -v --imptypes note -a --ignoretimestamps -d
In the debug output it is already visible that the order of the "additional fields" is changed/mixed (see above).
Imported the generated 1PUX file into the 1Password app.
In the 1Password app it is still the same (of course): a different order of fields when selecting different items.
1Password Version: 8.10.18
Extension Version: Not Provided
OS Version: Windows 10
Browser: Not Provided
Comments
Sorry for the delay. I didn't see this post until late in the day.
Good debugging.
I've posted a new Csv.pm converter in the Custom area at the converter suite's download home. Replace your Converters\Csv.pm with this new one and try the conversion. Here are my results:
perl convert.pl csv -v --imptypes note skoch6030.csv
You don't need the --addfields option, since that is the default for this converter. The --ignoretimestamps option is not necessary or useful for your CSV file, since it has neither a Modified nor a Created column. Record timestamps are always created by the 1PUX generator: they will either be those available in the password manager's export (or in the CSV data in this case), or, when none are available, the time of conversion is used. Let me know how this works out for you.
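Applied to the TestData.csv example above, that means the command can be trimmed to something like:

perl convert.pl csv ..\TestData.csv -v --imptypes note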
Edit: FYI - With a few customizations to this converter, you could have your fields placed in a named section (or sections) instead of "Original Fields". Example: (screenshot)
Thank you very much - with the new Csv.pm it works as expected!!
What about the section thing? Is it an upcoming feature or am I already able to use the sections?
...thanks again for your great tool and your immediate help!
You're welcome. Thanks for the feedback. This change will go into the next update I post.
The suite was designed so that customizing a converter is usually rather simple. I added a new entry into the Csv.pm converter's table of category definitions to match your requirements. I'll explain:
The new entry is highlighted in yellow in the screenshot; each of its 5 internal rows defines one of the fields within the category.
The red arrows point to the new input category name snote (i.e. used by --imptypes) and the 1Password category to which I've mapped this new category definition (i.e. a Secure Note). The type_out attribute in the definition is set to note, which is the converter suite's output type (usable with --exptypes) for Secure Note.
The green arrow points to the internal field key that 1Password will use for the entry. I use an underscore to designate keys not native to 1Password's category templates.
The highlighter-yellow arrow points to a number, in this case a 0 or a 1, to indicate whether or not a pattern match of this field definition will trigger category detection. More below.
The purple arrow points to the patterns used to match the data in the export file. I used regular expressions here (they could be strings). The i at the end of the regular expression indicates that the matching should be case-insensitive. This is required because the csv converter lower-cases all field names (i.e. CSV column labels); I could have instead just used lower-case patterns.
The blue arrows point to the section name definitions for the corresponding entry and the field labels that 1Password will use. You'll notice that the section names have two parts, separated by a period (.): the first part is a unique internal name for the section, and the second part is the label that 1Password will use for the record. Since 1Password upper-cases its section names, I just defined these using lower case. Also notice that I used mixed case in the field label names, to show that uppercase and lowercase labels are possible (see the screenshot in my previous post above).
The converter suite works by matching the export data against the definitions in the table. It processes one export record at a time and, for each field in the record, examines each category definition, and then each field definition within it, trying to make a match. When a pattern match is made (purple arrow), and there is a non-zero value for the category detection value (highlighter yellow), then this input category becomes a candidate (the exact mechanism is specific to each converter). This csv converter requires just one (unique) pattern match; when a match is made, it triggers a category match, and the converter will use this import category (--imptypes).
It will map each export field from the record as per the field definitions in that category definition.
As per your request, I created this new definition to map to a 1Password Secure Note. However, these custom definitions could have been added appropriately to any existing category definition, and the data would then get added to the corresponding 1Password category (e.g. Software License).
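In plain text, the shape of such an entry would be roughly as follows. This is only an illustrative approximation pieced together from the description above; the attribute names and exact layout in the real Csv.pm may differ, and the section names here are hypothetical:

use strict;
use warnings;

# Illustrative sketch only: the field positions and option names below are
# reconstructed from the description above and are assumptions, not the
# actual Csv.pm schema.
my %card_field_specs = (
    snote => {
        type_out => 'note',    # the suite's output type for a Secure Note
        fields   => [
            # [ internal key, detect (0/1), match pattern, section/label attributes (names assumed) ]
            [ '_id_hex',       1, qr/^id \(hex\)$/i,   { section => 'card.card data', label => 'ID (hex)'     } ],
            [ '_dataversion',  0, qr/^dataversion$/i,  { section => 'card.card data', label => 'DataVersion'  } ],
            [ '_dosxextender', 0, qr/^dosxextender$/i, { section => 'system.system',  label => 'DosXExtender' } ],
            [ '_cardtype',     0, qr/^cardtype$/i,     { section => 'system.system',  label => 'CardType'     } ],
            [ '_kuny_dez',     0, qr/^kuny \(dez\)$/i, { section => 'card.card data', label => 'KunY (dez)'   } ],
        ],
    },
);

# At least one field carries a non-zero detection value so that a match on it
# can trigger the snote category; each section value uses the two-part
# "internalname.Label" convention described above.

The point is simply that each field row carries its internal key, its detection flag, its matching pattern, and the section/label that 1Password should display.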
I uploaded my custom version of the Csv.pm code to the Custom area. That custom converter is called Csvs.pm, and would be called by the name csvs. Example command line:
perl convert.pl csvs -v skoch6030.csv
Notice that I omit --imptypes note - this is important so that you don't force category detection to the input category note, but rather allow it to match the new snote category.
Feel free to ping me with additional questions or for help.
Works great. I took one hour to understand the mechanism and to play around a bit. Then I customized 'snote' for my needs with 3 sections + values that have no section.
Thanks a lot. Very flexible!