Extract everything from PDF [closed]
Looking for a solution to extract content from a PDF file (using a console tool or a library).
It will be used on a server to produce online e-books from uploaded PDF files.
I need to extract the following:
- text with fonts and styles;
- images;
- audio and video;
- links and hotspots;
- page snapshots and thumbnails;
- general PDF information, e.g. book layout, number of pages, etc.
I am looking at Adobe PDF Library ($5,000 though), BCL SDK (?), PDFlib (€795), and QuickPDF ($250).
At the moment we are using the open source pdf2xml (extracts text, images and links) and Ghostscript (snapshots and thumbnails; see the minimal invocation sketch after the list below). The things still left are:
- fonts;
- multimedia;
- hotspots;
- page info.
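For reference, the Ghostscript step can be driven from server code like this; a minimal Java sketch, assuming the `gs` binary is on the PATH (`input.pdf` and the output name pattern are placeholders):

```java
import java.io.IOException;

public class RenderThumbnails {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Render every page of input.pdf to a 72 dpi PNG
        // (thumb-001.png, thumb-002.png, ...). `-o` implies -dBATCH -dNOPAUSE.
        Process gs = new ProcessBuilder(
                "gs", "-sDEVICE=png16m", "-r72",
                "-o", "thumb-%03d.png", "input.pdf")
                .inheritIO()   // forward Ghostscript's console output
                .start();
        if (gs.waitFor() != 0) {
            throw new IOException("Ghostscript exited with an error");
        }
    }
}
```

Raising the `-r` resolution gives full-size page snapshots instead of thumbnails.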
We are hesitating between paying a lot of money (and possibly making a mistake by choosing the wrong solution) and using free/open source solutions.
Which solution would you recommend as the BEST for extracting nearly everything from a PDF?
Any comments will be much appreciated.
It sounds like with a few days' or weeks' effort you can adapt the open source tools to your needs. Fonts and everything else can certainly be extracted; this is something every PDF reader must do anyway in order to display them.
You should probably take an estimate of programmer cost ($/hr) and multiply it by the estimated time it would take to add the missing open source functionality (60-80 hours?). At $70/hr, for example, 70 hours already comes to $4,900. If the result is greater than or close to $5,000 anyway, you might consider just buying the commercial software.
Otherwise, with the help of the (quite good) PDF Reference, you should be well on your way.
One more thing: you might find Poppler to be of help. It is primarily for rendering PDF, but that is closely related to what you are trying to do, and it also ships command-line utilities that cover some of the items still on your list (see the sketch below).
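For instance, Poppler's bundled utilities already cover two of the remaining items, fonts and page info; here is a minimal Java sketch that shells out to them, assuming the tools are installed and `input.pdf` is a placeholder:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class PopplerProbe {
    // Run one Poppler command-line utility and print its output.
    // Assumes Poppler's tools (pdfinfo, pdffonts, ...) are on the PATH.
    static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);
            }
        }
        p.waitFor();
    }

    public static void main(String[] args) throws Exception {
        run("pdfinfo", "input.pdf");   // page count, page size, general info
        run("pdffonts", "input.pdf");  // fonts used and their embedding status
    }
}
```

The same package also provides pdfimages and pdftoppm for image extraction and page snapshots, if you want to consolidate on one toolchain.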
A: Fonts: I don't think fonts can be extracted.
B: Not sure about multimedia.
C: What are hotspots?
D: Have a look at iTextSharp (open source); you might be able to extract more page info with it. A minimal sketch follows.
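A minimal sketch for point D; this uses the Java edition, iText 5 (iTextSharp mirrors the same API in C#), and `sample.pdf` is a placeholder:

```java
import com.itextpdf.text.Rectangle;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.parser.PdfTextExtractor;

public class PageInfo {
    public static void main(String[] args) throws Exception {
        PdfReader reader = new PdfReader("sample.pdf");
        try {
            System.out.println("Pages: " + reader.getNumberOfPages());
            Rectangle size = reader.getPageSize(1);  // media box of page 1
            System.out.println("Page 1: " + size.getWidth() + " x " + size.getHeight());
            // Plain text of page 1 (layout-aware output needs a custom strategy)
            System.out.println(PdfTextExtractor.getTextFromPage(reader, 1));
        } finally {
            reader.close();
        }
    }
}
```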
There is also the ByteScout PDF Suite, which contains three SDKs designed specifically to extract content from PDF, render PDF as images, and convert PDF to HTML. It does not extract font files, but it supports XML output and text extraction that preserves the original layout.
There is also a free utility, "PDF Multitool", based on the same engine, so you can play with it to see how it works on the PDF files you have.
Disclaimer: I work for ByteScout
Yes, you can extract the text, text style information, images, link annotations, and bookmarks, and you can even get paragraph ID information (tables excepted). Check this link:
http://www.pdftron.com/pdfnet/index.html
It really works fine.
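Not from the original answer, but as an illustration: a minimal text extraction sketch against the PDFNet Java API, with `sample.pdf` as a placeholder (verify class and method names against the current PDFNet docs; newer versions require a license key in `initialize`):

```java
import pdftron.PDF.PDFDoc;
import pdftron.PDF.Page;
import pdftron.PDF.TextExtractor;
import pdftron.PDFNet;

public class PdfNetText {
    public static void main(String[] args) throws Exception {
        PDFNet.initialize();                    // newer versions take a license key
        PDFDoc doc = new PDFDoc("sample.pdf");  // placeholder file name
        doc.initSecurityHandler();
        Page page = doc.getPage(1);
        TextExtractor txt = new TextExtractor();
        txt.begin(page);                        // read text from page 1
        System.out.println(txt.getAsText());
        doc.close();
    }
}
```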
Apache Tika (http://tika.apache.org/). Its main advantage is extracting text from many file types, but it can solve your problem as well.
For the implementation: The goal of Tika is to reuse existing parser libraries like PDFBox or Apache POI as much as possible, and so most of the parser classes in Tika are adapters to such external libraries.
I think Tika may work as you describe, extracting things by category. Not an exact answer yet; a minimal sketch follows.
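A minimal sketch of the Tika approach (my own illustration rather than the answerer's promised code), assuming Tika 1.x on the classpath and a placeholder `sample.pdf`:

```java
import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.sax.BodyContentHandler;

public class TikaExtract {
    public static void main(String[] args) throws Exception {
        AutoDetectParser parser = new AutoDetectParser(); // delegates PDFs to PDFBox
        BodyContentHandler handler = new BodyContentHandler(-1); // -1 = no write limit
        Metadata metadata = new Metadata();
        try (InputStream in = new FileInputStream("sample.pdf")) {
            parser.parse(in, handler, metadata);
        }
        System.out.println(handler.toString());   // extracted text
        for (String name : metadata.names()) {    // general document info
            System.out.println(name + ": " + metadata.get(name));
        }
    }
}
```

General document information such as the page count typically surfaces in the `Metadata` object for PDFs.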