I have a lot of PDF files on my server: product guides and documentation. So many, in fact, that I often want to search for a keyword but don't know which manual contains it. Adobe's Acrobat Reader can do this, but it's not supported on Linux (and it's not always a "friendly" program). In researching Linux solutions I came across what seemed like a workable option with a UI on top of lots of features: recoll.
So I installed it (apt install recoll) along with the needed support files and launched it. The first thing it has to do is index the files. At this writing I have just under 500,000 files of all sorts (system, libraries, binaries, etc.) on my server. I let it index everything rather than limiting it to PDF files (who knows, if it works as advertised I may want to search scripts, document files, etc.).
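In hindsight, I probably could have scoped the index down first. If I'm reading the recoll documentation right, ~/.recoll/recoll.conf accepts a topdirs setting to limit which trees get crawled and an indexedmimetypes setting to limit file types. A minimal sketch of what I believe that would look like (the path is a placeholder for wherever your manuals live):

    # ~/.recoll/recoll.conf
    # Only crawl the documentation tree, not the whole filesystem
    topdirs = ~/manuals
    # Only index PDFs; everything else gets skipped
    indexedmimetypes = application/pdf

That alone would have kept the indexer away from the other 490,000-odd system files.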
I minimized the program and let it run. After a while I did a search and it produced usable results.
But the impact on my system was deeply felt. Everything got terribly sluggish, even though 'top', 'ps', and other utilities (like Stacer, bashtop, etc.) reported nothing untoward. I spent the last three days conducting a training session, and it was almost painful to wait for screen updates and the like. I finally stopped all the recoll indexing processes, removed the program, and deleted my .recoll folder.
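Had I stuck with it, the other thing I might have tried is running the indexer at the lowest CPU and I/O priority with the standard nice and ionice tools, something like:

    # One-shot index pass: idle I/O class, lowest CPU priority
    ionice -c 3 nice -n 19 recollindex

I can't say whether that would have saved my three days of training, but it's the first thing I'd test before giving up on it again.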
My system is back to normal, and I've found I can just use the command-line tool pdfgrep to perform my searches.
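For the record, my searches now look roughly like this (the path is a placeholder for my documentation tree):

    # Search all PDFs under the tree recursively, case-insensitively,
    # printing the page number of each match
    pdfgrep -rin "keyword" /srv/manuals

It's slower per search than a prebuilt index, but it costs nothing when I'm not searching.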
This isn't a call for help, but a posting of my experience. If you use recoll and it doesn't drag your system into molasses-mode, I'd be curious how you did it.