
Python Raytracing

I'm building a simple raytracer in pure Python (just for the heck of it), but I've hit a roadblock.

The setup of my scene is currently this:

  1. Camera located at 0, -10, 0 pointing along the y-axis.
  2. Sphere with radius 1 located at 0, 0, 0.
  3. The image plane sits a distance of 1 in front of the camera and has a width and height of 0.5.

I'm shooting photons through the image plane at uniformly random positions, and if a photon intersects an object, I draw a red dot on the image canvas at the point on the image plane the ray passed through.
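The snippets below lean on a few small helper classes (Vector, Point, Ray) that aren't shown in the question. Here is a minimal sketch of what they would have to look like for the code to run — the names are taken from the code, the implementations are guessed:

import math

class Vector(object):
  def __init__(self, x, y, z):
    self.x, self.y, self.z = float(x), float(y), float(z)

  def __add__(self, other):
    return Vector(self.x + other.x, self.y + other.y, self.z + other.z)

  def __sub__(self, other):
    return Vector(self.x - other.x, self.y - other.y, self.z - other.z)

  def __mul__(self, s):
    # Scalar multiplication.
    return Vector(self.x * s, self.y * s, self.z * s)

  def dot(self, other):
    return self.x * other.x + self.y * other.y + self.z * other.z

  def length(self):
    return math.sqrt(self.dot(self))

# The question's code uses Point and Vector interchangeably.
Point = Vector

class Ray(object):
  def __init__(self, origin, direction):
    self.origin = origin
    self.direction = direction  # note: nothing here normalises this

  def position(self, t):
    # Point reached after travelling parameter t along the ray.
    return self.origin + self.direction * t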

My intersection code (I only have spheres):

from math import sqrt

def intersection(self, ray):
  # Vector from the ray origin to the sphere centre.
  cp = self.pos - ray.origin
  # Distance along the ray to the closest approach to the centre
  # (only a true distance if ray.direction is a unit vector).
  v = cp.dot(ray.direction)
  discriminant = self.radius**2 - cp.dot(cp) + v * v

  if discriminant < 0:
    return False  # ray misses the sphere
  else:
    return ray.position(v - sqrt(discriminant))  # position of ray at the nearer hit
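This is the standard closest-approach form of the ray–sphere test: with cp = centre - origin and v = cp.dot(d), the nearer hit is at t = v - sqrt(r**2 - cp.dot(cp) + v*v), which only holds when d is a unit vector. As a quick sanity check with the sketch classes above (Sphere here is a guessed minimal class with pos and radius), a ray fired from the camera position straight at the sphere with a unit-length direction hits at (0, -1, 0):

class Sphere(object):
  def __init__(self, pos, radius):
    self.pos = pos
    self.radius = radius
  intersection = intersection  # bind the function above as a method

sphere = Sphere(Point(0, 0, 0), 1.0)
ray = Ray(Point(0, -10, 0), Vector(0, 1, 0))  # direction already unit length
hit = sphere.intersection(ray)
print hit.x, hit.y, hit.z  # 0.0 -1.0 0.0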

And my rendering code (it renders a certain number of photons, not pixel-by-pixel):

def bake(self, rays):
  # Canvas size: 800 pixels per unit of image-plane extent.
  self.image = Image.new('RGB', [int(self.camera.focalplane.width * 800), int(self.camera.focalplane.height * 800)])
  canvas = ImageDraw.Draw(self.image)

  for i in xrange(rays):
    # Pick a uniformly random point on the image plane.
    x = random.uniform(-camera.focalplane.width / 2.0, camera.focalplane.width / 2.0)
    z = random.uniform(-camera.focalplane.height / 2.0, camera.focalplane.height / 2.0)

    # Ray from the camera through that point.
    ray = Ray(camera.pos, Vector(x, 1, z))

    for name in scene.objects.keys():
      result = scene.objects[name].intersection(ray)

      if result:
        # Intersect the ray with the image plane (normal along y)
        # to recover the plane point the ray passed through.
        n = Vector(0, 1, 0)
        d = ((ray.origin - Point(self.camera.pos.x, self.camera.pos.y + self.camera.focalplane.offset, self.camera.pos.z)).dot(n)) / (ray.direction.dot(n))
        pos = ray.position(d)

        x = pos.x
        y = pos.y

        # Map plane coordinates to pixel coordinates and plot a dot.
        canvas.point([int(self.camera.focalplane.width * 800) * (self.camera.focalplane.width / 2 + x) / self.camera.focalplane.width,
                      int(self.camera.focalplane.height * 800) * (self.camera.focalplane.height / 2 + z) / self.camera.focalplane.height],
                      fill = 128)
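An aside on that last expression: with width = height = 0.5 and the 800 px-per-unit scale, the canvas is 400x400 and a plane coordinate u in [-0.25, 0.25] maps to pixel 400 * (0.25 + u) / 0.5. Pulled out into a helper (plane_to_pixel is a hypothetical name, not from the original code), the mapping is:

def plane_to_pixel(u, half_extent, pixels):
  # Map plane coordinate u in [-half_extent, +half_extent]
  # to a pixel index in [0, pixels].
  return int(pixels * (half_extent + u) / (2.0 * half_extent))

print plane_to_pixel(0.0, 0.25, 400)    # 200 (plane centre -> canvas centre)
print plane_to_pixel(-0.25, 0.25, 400)  # 0   (left edge -> first column)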

It should work properly, but when I render a test image, I get nothing that looks like the outline of a sphere:

(image: the rendered test output)

I was expecting something like this:

(image: the expected output)

Does anybody know why my code isn't functioning properly? I've been tweaking and rewriting this one part for way too long...


Are you normalising the ray's direction vector?
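That's consistent with the intersection code: v = cp.dot(ray.direction) is only the distance to the closest approach when the direction has length 1, but bake() builds rays with Vector(x, 1, z), whose length varies across the image plane. A sketch of one possible fix (normalize is a hypothetical helper, assuming a Vector like the sketch above):

import math

def normalize(v):
  # Return a unit-length copy of v (assumes a non-zero vector).
  length = math.sqrt(v.dot(v))
  return Vector(v.x / length, v.y / length, v.z / length)

# In bake(), build the ray with a unit-length direction instead:
ray = Ray(camera.pos, normalize(Vector(x, 1, z)))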

