dreamdancer 0 #1 October 8, 2009

a peek into the future?

Quote:
HOW can image sensors - the most complicated and expensive part of a digital camera - be made cheaper and less complex? Easy: take the lid off a memory chip and use that instead. As simple as it sounds, that pretty much sums up a device being developed by a team led by Edoardo Charbon, an engineer at the Swiss Federal Polytechnic Institute (EPFL) in Lausanne. In a paper presented at an imaging conference in Kyoto, Japan, this week, the team say that their so-called "gigavision" sensor will pave the way for cellphones and other inexpensive gadgets that take richer, more pleasing pictures than today's devices. Crucially, Charbon says the device performs better in both very bright light and dim light - conditions which regular digital cameras struggle to cope with.

While Charbon's idea is new and has a patent pending, the principle behind it is not. It has long been known that memory chips are extremely sensitive to light: remove their black plastic packages to let in light, and the onrush of photons energises electrons, creating a current in each memory cell that overwhelms the tiny stored charge that might have represented digital information. "Light simply destroys the information," says Martin Vetterli, a member of the EPFL team. A similar effect occurs aboard spacecraft: when energetic cosmic rays hit a cell in an unprotected memory chip they can "flip" the state of the cell, corrupting the data stored in the chip.

What Charbon and his team have found is that when they carefully focus light arriving on an exposed memory chip, the charge stored in every cell corresponds to whether that cell is in a light or dark area. The chip is in effect storing a digital image. All very clever, you might say, but why would anyone want to do that? The answer is that the two types of sensor chips used in today's digital cameras store the brightness of each pixel as an analogue signal. To translate this into a form that can be stored digitally, they need complex, bulky, noise-inducing circuitry.

http://www.newscientist.com/article/mg20427295.100-cheap-naked-chips-snap-a-perfect-picture.html
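The article describes each exposed memory cell as ending up simply "light or dark" - a one-bit-per-cell image, rather than the analogue per-pixel brightness a conventional sensor records and then has to digitise. As a rough illustration of how a purely binary readout could still produce a normal-looking photo, here is a minimal Python sketch; the block-counting step is an assumption made for illustration, not the EPFL team's published processing:

```python
# Illustrative sketch only - not the EPFL team's actual readout pipeline.
# Assumes a hypothetical sensor whose cells each report a single bit
# (1 = enough photons arrived to flip the cell, 0 = it stayed dark),
# and recovers a conventional grayscale image by counting set bits
# within small blocks of cells.
import numpy as np

def binary_cells_to_grayscale(bits, block=8):
    """Collapse a 2-D array of 0/1 cell states into 8-bit pixels.

    Each (block x block) patch of binary cells becomes one output pixel
    whose brightness is the fraction of cells that flipped.
    """
    h, w = bits.shape
    h -= h % block          # trim so the array tiles evenly
    w -= w % block
    tiles = bits[:h, :w].reshape(h // block, block, w // block, block)
    fraction_lit = tiles.mean(axis=(1, 3))          # 0.0 .. 1.0 per patch
    return (fraction_lit * 255).astype(np.uint8)    # conventional 8-bit pixels

# Toy example: simulate a 512x512 grid of cells under a horizontal
# brightness gradient, where brighter light flips more cells.
rng = np.random.default_rng(0)
light = np.tile(np.linspace(0.1, 0.9, 512), (512, 1))   # photon "intensity"
cells = (rng.random((512, 512)) < light).astype(np.uint8)
image = binary_cells_to_grayscale(cells)
print(image.shape, image.min(), image.max())
```

Presumably this is also where the "gigavision" name comes from: you need many more binary cells than final pixels, which is exactly what a dense memory array provides.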
DSE 5 #2 October 8, 2009

This topic has had several forums around the web buzzing; it's kind of exciting if it can truly come to fruition.
billvon 3,118 #3 October 8, 2009

I first saw this technique used in 1982, when the (small) DRAMs of the day could be used as image sensors. It's not new.
DSE 5 #4 October 8, 2009

Bill, was that the Micron-I system? There was a company in Idaho doing something with DRAM, but it seems they went away pretty quickly. I've never seen this in action, but it sure has generated a lot of buzz lately.
billvon 3,118 #5 October 8, 2009

> Was that the Micron-I system?

No, this was actually an article in Popular Electronics back in the 1980s. It described a design (both HW and SW) to do this, and showed some results. They looked very good. Of course, this was black and white, since the DRAM didn't have different spectral sensitivities per pixel.
antonija 0 #6 October 9, 2009

Quote:
image sensors - the most complicated and expensive part of a digital camera

There might be something wrong with this claim.
DSE 5 #7 October 9, 2009

Quote:
> image sensors - the most complicated and expensive part of a digital camera

There might be something wrong with this claim.

If you look at only the imager itself, you're right; it's probably an inaccurate statement. However, camcorder designers don't need to redesign glass; they effectively pull it off a shelf. Encoders can only be what the spec allows. The imager, however, requires design and is ever changing. Look at the GoPro Hero as a related example: everyone who has used one has seen the black dot generated by the sun. The imager causes this, and they've (supposedly) engineered it out of the new HD product. Add the cost of design, engineering, and tooling into the cost of the new chip, and you've very likely got the most expensive part of the camcorder.
antonija 0 #8 October 9, 2009

Yeah, but if you just swap the current chip for a new, cheaper one (and we know chips themselves are cheap already), you still need to redesign just about every piece of electronics (and maybe software) to actually get that picture into a human-readable format. It might even make things more expensive, because of the "cost of design, engineering, tooling" you mentioned. I hope we'll see something better, cheaper, and more robust, but quite honestly I'm not holding my breath.
DSE 5 #9 October 9, 2009

Quote:
Yeah, but if you just swap the current chip for a new, cheaper one (and we know chips themselves are cheap already), you still need to redesign just about every piece of electronics (and maybe software) to actually get that picture into a human-readable format.

Could be, but not likely. Sony, Canon, and Panasonic haven't changed much at all in the way of the chips they use. Panasonic has used, and still uses, stock "off the shelf" imagers; Sony makes their own and supplies Fuji, Canon (video only), and many other companies with their OEM imagers. It also depends on whether they're using CCD or CMOS. Lotsa variables, but given that the manufacturers haven't had anything drastic in the way of imager technology for a while...
antonija 0 #10 October 9, 2009

Quote:
...but given that the manufacturers haven't had anything drastic in the way of imager technology for a while...

...one would assume that current technology (silicon-based chips) might be nearing its physical limits, like CPUs reached a few years ago, which is why we see a stream of multi-core CPUs today.
DSE 5 #11 October 9, 2009

Quote:
...one would assume that current technology (silicon-based chips) might be nearing its physical limits.

CCD, yes. CMOS, no. Not even close. With individually addressable pixels, mapping, voltage per... CMOS is just getting started, in many ways, even though it's as old a tech as CCD. Exmor was just the next step in CMOS, and the upcoming CMOS products are pretty cool. The most advanced cameras in the world have gone to CMOS. At 65mm in size as a "mid" point, we're just seeing the tip of this iceberg. Next we'll see global shutters combined with CMOS as processors that can manage the chip output come into their own. CMOS tech can go far beyond the CPU's ability to manage the stream. Or it can be used for devices as simple as your cell phone or grocery scanner (although grocery scanners are rapidly converting to CCD due to cost).
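For what it's worth, here's a back-of-the-envelope Python sketch of one thing individually addressable CMOS pixels buy you: windowed (region-of-interest) readout, trading resolution for frame rate. The per-pixel readout cost and the function names below are made-up assumptions for illustration, not any real sensor's interface:

```python
# Hypothetical sketch: with a CCD the whole array must be clocked out,
# but a CMOS sensor can (in principle) read only a region of interest.
# PIXEL_READ_NS and the function names are illustrative assumptions.
import numpy as np

FULL_WIDTH, FULL_HEIGHT = 1920, 1080
PIXEL_READ_NS = 10                      # assumed readout cost per pixel

def readout_time_ms(width, height):
    """Approximate time to read a window, ignoring row/column overheads."""
    return width * height * PIXEL_READ_NS / 1e6

def read_window(frame, x, y, width, height):
    """Address only the requested pixels - the CMOS-style ROI readout."""
    return frame[y:y + height, x:x + width].copy()

frame = np.random.randint(0, 4096, (FULL_HEIGHT, FULL_WIDTH), dtype=np.uint16)

full_ms = readout_time_ms(FULL_WIDTH, FULL_HEIGHT)
roi = read_window(frame, x=800, y=400, width=320, height=240)
roi_ms = readout_time_ms(320, 240)

print(f"full frame: {full_ms:.1f} ms  (~{1000 / full_ms:.0f} fps)")
print(f"320x240 ROI: {roi_ms:.2f} ms  (~{1000 / roi_ms:.0f} fps)")
```

Under these assumed numbers, reading the full frame takes roughly 20 ms while a small window takes well under 1 ms, which is the kind of flexibility a whole-array CCD readout can't offer.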