I posted this here because I don't think this is really a help topic; I'm merely looking for suggestions and/or recommendations.
Using czkawka (a wonderful tool for locating duplicate files) I wound up with a large set of data that, when saved, results in a JSON file. I know that programming tools like geany can read and "beautify" JSON files, but I'm hoping to simply extract the full path of each file and write it out to another (hopefully text) file, so I can then review the duplicates and decide on a pruning method.
I'm thinking the simplest, although not necessarily the easiest, method is to use sed to write out the lines containing the "path" specification to a text file, but I wonder if there are automated tools that can do this?
I am not a JavaScript programmer, so native JSON doesn't do much for me. Reading a large (5.7 MB) file line by line just seems inefficient and time-consuming.
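For what it's worth, here's a rough sketch of the kind of extraction I have in mind, in Python rather than JavaScript. I'm assuming each duplicate entry carries a "path" key; I haven't verified czkawka's exact schema, so the sample data below is a hypothetical stand-in:

```python
import json

def collect_paths(node):
    """Recursively gather every string value stored under a "path" key."""
    found = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "path" and isinstance(value, str):
                found.append(value)
            else:
                found.extend(collect_paths(value))
    elif isinstance(node, list):
        for item in node:
            found.extend(collect_paths(item))
    return found

# Hypothetical stand-in for czkawka's JSON output; the real schema may differ.
sample = json.loads("""
{"1024": [{"path": "/home/me/a.jpg", "modified": 1},
          {"path": "/backup/a.jpg", "modified": 2}]}
""")

paths = collect_paths(sample)
print("\n".join(paths))
```

In practice I'd replace the inline sample with `json.load(open("duplicates.json"))` and write the result to a text file, one full path per line, which is exactly the review list I'm after.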
Any suggestions?
P.S. I know I can do this with both grep and sed because I've tried both, but is there an easier way?