
Retrieving dimensions of an image without downloading the whole image

I'm using open-uri to download remote images and then the imagesize gem to get their dimensions. The problem is that this gets painfully slow when more than a handful of images need to be processed.

How can I download just enough data to determine the dimensions for various image formats?

Are there any other ways to optimize this?


I believe if you go raw socket (issue a bare-bones HTTP request), there's no need to download more than a few bytes (and then abort the connection) to determine the dimensions of an image.

require 'uri'
require 'socket'
raise "Usage: url [bytes-to-read [output-filename]]" if ARGV.length < 1
uri   = URI.parse(ARGV.shift)
bytes = (ARGV.shift || 50).to_i
file  = ARGV.shift
$stderr.puts "Downloading #{bytes} bytes from #{uri}"
Socket.tcp(uri.host, uri.port) do |sock|
  # send a minimal HTTP/1.0 request
  sock.print "GET #{uri.path} HTTP/1.0\r\nHost: #{uri.host}\r\n\r\n"
  sock.close_write
  # skip the response headers (read up to the blank line)
  while sock.readline.chomp != ""; end
  # read only the first N bytes of the body
  if file
    File.open(file, "wb") { |f| f.write(sock.read(bytes)) }
  else
    puts sock.read(bytes)
  end
end

For example, if I feed the first 33 bytes of a PNG file (13 bytes for a GIF) into exiftool, it will give me the image size:

$ ruby download_partial.rb http://yardoc.org/images/ss5.png 33 | exiftool - | grep ^Image
Downloading 33 bytes from http://yardoc.org/images/ss5.png
Image Width                     : 1000
Image Height                    : 300
Image Size                      : 1000x300
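
For completeness, a minimal sketch of parsing the dimensions straight out of those header bytes in Ruby, so exiftool isn't required at all. It assumes a binary string (e.g. the result of sock.read above), covers only PNG and GIF, and dimensions_from_header is just a name made up for the example; you need at least 24 bytes for a PNG and 10 for a GIF.

def dimensions_from_header(header)
  if header[0, 8] == "\x89PNG\r\n\x1a\n".b
    # PNG: the IHDR chunk starts at offset 8; width and height are
    # big-endian 32-bit integers at offsets 16 and 20
    header[16, 8].unpack("N2")
  elsif ["GIF87a", "GIF89a"].include?(header[0, 6])
    # GIF: width and height are little-endian 16-bit integers
    # right after the 6-byte signature
    header[6, 4].unpack("v2")
  end
end

# e.g. width, height = dimensions_from_header(sock.read(33))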


I'm not aware of any way to specify how many bytes to download with a normal HTTP request. It's an all-or-nothing situation.

Some file types do allow you to fetch sections of the file, but you would have to have control of the server in order to enable that.

It's been a long time since I've played at this level, but theoretically you could use a block with Net::HTTP or OpenURI, count bytes until you've received enough to reach the image-size block, and then close the connection. Your TCP stack would probably not be too happy with you, especially if you were doing that a lot. If I remember right, it wouldn't release the memory until the connection had timed out, and it would eat up available connections, either on your side or the server's. And if I ran a site and found my server's performance being compromised by your app prematurely closing connections, I'd ban you.
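
For what it's worth, here is a rough sketch of that idea with Net::HTTP: stream the body in chunks, keep the first N bytes, and bail out by raising an exception so the connection is closed early. fetch_head_bytes and EnoughRead are names invented for this example, not part of any library, and whether aborting mid-transfer like this is acceptable is subject to the caveats above.

require 'net/http'
require 'uri'

class EnoughRead < StandardError; end

def fetch_head_bytes(url, limit = 64)
  uri    = URI.parse(url)
  buffer = "".b
  begin
    Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
      http.request_get(uri.request_uri) do |response|
        response.read_body do |chunk|
          buffer << chunk
          # stop streaming once we have enough for the image header
          raise EnoughRead if buffer.bytesize >= limit
        end
      end
    end
  rescue EnoughRead
    # expected: we deliberately cut the transfer short; the ensure block
    # inside Net::HTTP.start closes the half-read connection
  end
  buffer[0, limit]
end

# e.g. head = fetch_head_bytes("http://yardoc.org/images/ss5.png", 33)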

Ultimately, your best solution is to talk to whoever owns the site you are pillaging and see if they have an API that can tell you the image dimensions. Their side of the connection can find that out a lot faster than yours, since you have to retrieve the entire file. If nothing else, offer to write them something that can accomplish that. Maybe they'll understand that, by enabling it, you won't be consuming all their bandwidth retrieving images.

