Image sensing apparatus and method of controlling the apparatus

Abstract

This invention makes it possible to provide a technique for suppressing a decrease in resolution of a sensed image even when an image sensor on which solid-state image sensing elements with different sensitivities are arranged is used. A demosaic unit obtains a color component of a given pixel, sensed at the first sensitivity, by performing interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is sensed at the first sensitivity. The demosaic unit also obtains a color component of a given pixel, sensed at the second sensitivity, by performing interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is sensed at the second sensitivity.

Claims

1 . An image sensing apparatus including an image sensor on which solid-state image sensing elements which sense color components at a first sensitivity and solid-state image sensing elements which sense color components at a second sensitivity higher than the first sensitivity are alternately, two-dimensionally arranged, comprising: a first calculation unit that obtains a color component of a given pixel, sensed at the first sensitivity, by performing interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is sensed at the first sensitivity, in a color image based on an image signal output from the image sensor; a second calculation unit that obtains a color component of a given pixel, sensed at the second sensitivity, by performing interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is sensed at the second sensitivity, in the color image output from the image sensor; and a unit that outputs the color image in which color components of all pixels are determined by said first calculation unit and said second calculation unit. 2 . 
The image sensing apparatus according to claim 1 , wherein the solid-state image sensing elements which sense the color components at the first sensitivity include solid-state image sensing elements DR which sense R components at a first brightness, solid-state image sensing elements DG which sense G components at the first brightness, and solid-state image sensing elements DB which sense B components at the first brightness, the solid-state image sensing elements which sense the color components at the second sensitivity include solid-state image sensing elements LR which sense R components at a second brightness higher than the first brightness, solid-state image sensing elements LG which sense G components at the second brightness, and solid-state image sensing elements LB which sense B components at the second brightness, and on the image sensor, a column of solid-state image sensing elements formed from the solid-state image sensing elements DG and the solid-state image sensing elements LG is arranged for every other column, and a ratio between the numbers of solid-state image sensing elements DR, solid-state image sensing elements DG, and solid-state image sensing elements DB is 1:2:1, and a ratio between the numbers of solid-state image sensing elements LR, solid-state image sensing elements LG, and solid-state image sensing elements LB is 1:2:1. 3 . The image sensing apparatus according to claim 1 , wherein said first calculation unit determines a pixel value, in the color image, of a pixel sensed by the solid-state image sensing element DG as a G component of this pixel, that has the first brightness, and said second calculation unit determines a pixel value, in the color image, of a pixel sensed by the solid-state image sensing element LG as a G component of this pixel, that has the second brightness. 4 . 
The image sensing apparatus according to claim 1 , wherein letting (i,j) be a pixel position of a pixel sensed by the solid-state image sensing element DG, said first calculation unit obtains a G component, that has the first brightness, of a pixel Q at a pixel position (i+1,j+1) by performing interpolation calculation using pixel values of pixels adjacent to the pixel Q in the color image. 5 . The image sensing apparatus according to claim 1 , wherein letting (i,j) be a pixel position of a pixel sensed by the solid-state image sensing element LG, said second calculation unit obtains a G component, that has the second brightness, of a pixel Q at a pixel position (i+1,j+1) by performing interpolation calculation using pixel values of pixels adjacent to the pixel Q in the color image. 6 . The image sensing apparatus according to claim 1 , wherein when a pixel position of a pixel sensed by the solid-state image sensing element DG is one of (i−1,j) and (i,j−1), said first calculation unit obtains a G component, that has the first brightness, of a pixel Q at a pixel position (i,j) by performing interpolation calculation using pixel values of two pixels adjacent to the pixel Q in the color image. 7 . The image sensing apparatus according to claim 1 , wherein when a pixel position of a pixel sensed by the solid-state image sensing element LG is one of (i−1,j) and (i,j−1), said second calculation unit obtains a G component, that has the second brightness, of a pixel Q at a pixel position (i,j) by performing interpolation calculation using pixel values of two pixels adjacent to the pixel Q in the color image. 8 . 
The image sensing apparatus according to claim 1 , wherein said first calculation unit obtains an R component, having the first brightness, of a pixel, sensed by the solid-state image sensing element DG, by performing interpolation calculation using an R component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element DG, and obtains a B component, having the first brightness, of the pixel, sensed by the solid-state image sensing element DG, by performing interpolation calculation using a B component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element DG, and said second calculation unit obtains an R component, having the second brightness, of the pixel, sensed by the solid-state image sensing element DG, by performing interpolation calculation using an R component and a G component, both having the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element DG, and obtains a B component, having the second brightness, of the pixel, sensed by the solid-state image sensing element DG, by performing interpolation calculation using a B component and a G component, both having the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element DG. 9 . 
The image sensing apparatus according to claim 1 , wherein said first calculation unit obtains an R component, having the first brightness, of a pixel, sensed by the solid-state image sensing element LR, by performing interpolation calculation using an R component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LR, and obtains a B component, having the first brightness, of the pixel, sensed by the solid-state image sensing element LR, by performing interpolation calculation using a B component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LR, and said second calculation unit determines an R component, having the second brightness, of the pixel sensed by the solid-state image sensing element LR as a pixel value, in the color image, of the pixel sensed by the solid-state image sensing element LR, and obtains a B component, having the second brightness, of the pixel, sensed by the solid-state image sensing element LR, by performing interpolation calculation using a B component and a G component, both having the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LR. 10 . 
The image sensing apparatus according to claim 1 , wherein said first calculation unit obtains an R component, having the first brightness, of a pixel, sensed by the solid-state image sensing element LB, by performing interpolation calculation using an R component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LB, and obtains a B component, having the first brightness, of the pixel, sensed by the solid-state image sensing element LB, by performing interpolation calculation using a B component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LB, and said second calculation unit obtains an R component, having the second brightness, of the pixel, sensed by the solid-state image sensing element LB, by performing interpolation calculation using an R component and a G component, both having the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LB, and determines a B component, having the second brightness, of the pixel sensed by the solid-state image sensing element LB as a pixel value, in the color image, of the pixel sensed by the solid-state image sensing element LB. 11 . 
The image sensing apparatus according to claim 1 , wherein said first calculation unit obtains an R component, having the first brightness, of a pixel, sensed by the solid-state image sensing element LG, by performing interpolation calculation using an R component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LG, and obtains a B component, having the first brightness, of the pixel, sensed by the solid-state image sensing element LG, by performing interpolation calculation using a B component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LG, and said second calculation unit obtains an R component, having the second brightness, of the pixel, sensed by the solid-state image sensing element LG, by performing interpolation calculation using an R component and a G component, both having the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LG, and obtains a B component, having the second brightness, of the pixel, sensed by the solid-state image sensing element LG, by performing interpolation calculation using a B component and a G component, both having the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LG. 12 . 
The image sensing apparatus according to claim 1 , wherein said first calculation unit determines an R component, having the first brightness, of a pixel sensed by the solid-state image sensing element DR as a pixel value, in the color image, of the pixel sensed by the solid-state image sensing element DR, and obtains a B component, having the first brightness, of the pixel, sensed by the solid-state image sensing element DR, by performing interpolation calculation using a B component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element DR, and said second calculation unit obtains an R component, having the second brightness, of the pixel, sensed by the solid-state image sensing element DR, by performing interpolation calculation using an R component and a G component, both having the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element DR, and determines a B component, having the second brightness, of the pixel sensed by the solid-state image sensing element DR as a pixel value, in the color image, of the pixel sensed by the solid-state image sensing element DR. 13 . 
The image sensing apparatus according to claim 1 , wherein said first calculation unit obtains an R component, having the first brightness, of a pixel, sensed by the solid-state image sensing element DB, by performing interpolation calculation using an R component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element DB, and determines a B component, having the first brightness, of the pixel sensed by the solid-state image sensing element DB as a pixel value, in the color image, of the pixel sensed by the solid-state image sensing element DB, and said second calculation unit obtains an R component, having the second brightness, of the pixel, sensed by the solid-state image sensing element DB, by performing interpolation calculation using an R component and a G component, both having the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element DB, and obtains a B component, having the second brightness, of the pixel, sensed by the solid-state image sensing element DB, by performing interpolation calculation using a B component and a G component, both having the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element DB. 14 . The image sensing apparatus according to claim 1 , further comprising: a unit that determines, for each pixel in the color image based on the image signal output from the image sensor, whether a pixel value is saturated, wherein said first calculation unit and said second calculation unit obtain color components of a pixel for which it is determined that the pixel value is unsaturated. 15 . 
A method of controlling an image sensing apparatus including an image sensor on which solid-state image sensing elements which sense color components at a first sensitivity and solid-state image sensing elements which sense color components at a second sensitivity higher than the first sensitivity are alternately, two-dimensionally arranged, comprising: a first calculation step of obtaining a color component of a given pixel, sensed at the first sensitivity, by performing interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is sensed at the first sensitivity, in a color image based on an image signal output from the image sensor; a second calculation step of obtaining a color component of a given pixel, sensed at the second sensitivity, by performing interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is sensed at the second sensitivity, in the color image output from the image sensor; and a step of outputting the color image in which color components of all pixels are determined in the first calculation step and the second calculation step.
TECHNICAL FIELD [0001] The present invention relates to a single-plate HDR image sensing technique. BACKGROUND ART [0002] The dynamic range can be widened by imparting different sensitivities to adjacent pixels and synthesizing a signal of a high-sensitivity pixel and a signal of a low-sensitivity pixel. For example, PTL1 discloses a color filter array in which all colors: light R, G, B, and W and dark r, g, b, and w are arranged on all rows and columns. Also, PTL2 discloses a sensor on which RGB rows and W rows are alternately arranged. [0003] However, in the technique disclosed in PTL1, all pixels are provided at the same ratio, so the sampling interval of G (Green), for example, is every two pixels. Thus, the resolution becomes only half that of a normal Bayer array. Also, when G of a high-sensitivity pixel is saturated, the sampling interval of G becomes every four pixels. Thus, the resolution becomes one quarter of that of the Bayer array. [0004] In the technique disclosed in PTL2, only luminance information is used for a low-sensitivity pixel, so color information cannot be held for a high-luminance portion. Also, the resolution in the vertical direction halves. CITATION LIST Patent Literature [0005] PTL1: Japanese Patent Laid-Open No. 2006-253876 [0006] PTL2: Japanese Patent Laid-Open No. 2007-258686 SUMMARY OF INVENTION Technical Problem [0007] As described above, when pixels with different sensitivities are arranged on the same sensor, the resolution inevitably decreases. Also, when a high-sensitivity pixel is saturated, the resolution further decreases. [0008] The present invention has been made in consideration of the above-mentioned problem, and has as its object to provide a technique for suppressing a decrease in resolution of a sensed image even when an image sensor on which solid-state image sensing elements with different sensitivities are arranged is used. 
Solution to Problem [0009] In order to achieve the object of the present invention, an image sensing apparatus according to the present invention has, for example, the following arrangement. That is, there is provided an image sensing apparatus including an image sensor on which solid-state image sensing elements which sense color components at a first sensitivity and solid-state image sensing elements which sense color components at a second sensitivity higher than the first sensitivity are alternately, two-dimensionally arranged, comprising a first calculation unit that obtains a color component of a given pixel, sensed at the first sensitivity, by performing interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is sensed at the first sensitivity, in a color image based on an image signal output from the image sensor, a second calculation unit that obtains a color component of a given pixel, sensed at the second sensitivity, by performing interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is sensed at the second sensitivity, in the color image output from the image sensor, and a unit that outputs the color image in which color components of all pixels are determined by the first calculation unit and the second calculation unit. Advantageous Effects of Invention [0010] With the arrangement according to the present invention, it is possible to suppress a decrease in resolution of a sensed image even when an image sensor on which solid-state image sensing elements with different sensitivities are arranged is used. [0011] Other features and advantages of the present invention will be apparent from the following descriptions taken in conjunction with the accompanying drawings. Note that the same reference characters denote the same or similar parts throughout the figures thereof. 
BRIEF DESCRIPTION OF DRAWINGS [0012] The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention. [0013] FIG. 1 is a block diagram illustrating an example of the functional configuration of an image sensing apparatus according to the first embodiment; [0014] FIG. 2 is a view illustrating an example of the arrangement of solid-state image sensing elements on an image sensor 103 ; [0015] FIG. 3A is a flowchart of processing for obtaining the DG value of each pixel; [0016] FIG. 3B is a flowchart of the processing for obtaining the DG value of each pixel; [0017] FIG. 4A is a flowchart of processing for obtaining the DR and DB values of each pixel; [0018] FIG. 4B is a flowchart of the processing for obtaining the DR and DB values of each pixel; [0019] FIG. 5 is a block diagram illustrating an example of the configuration of a demosaic unit 109 according to the second embodiment; [0020] FIG. 6A is a flowchart of processing for obtaining the DG value of each pixel; [0021] FIG. 6B is a flowchart of the processing for obtaining the DG value of each pixel; [0022] FIG. 7 is a flowchart of processing performed by a first interpolation unit 502 ; [0023] FIG. 8A is a flowchart of processing for obtaining the DG value of each pixel; [0024] FIG. 8B is a flowchart of the processing for obtaining the DG value of each pixel; [0025] FIG. 9 is a view illustrating an example of the arrangement of solid-state image sensing elements on an image sensor 103 ; [0026] FIG. 10A is a flowchart of processing for obtaining the DR and DB values of each pixel which constitutes a color image; and [0027] FIG. 10B is a flowchart of the processing for obtaining the DR and DB values of each pixel which constitutes a color image. 
DESCRIPTION OF EMBODIMENTS [0028] Preferred embodiments of the present invention will be described below with reference to the accompanying drawings. Note that the embodiments to be described hereinafter merely exemplify a case in which the present invention is actually practiced, and are practical embodiments of the arrangement defined in the claims. First Embodiment [0029] An example of the functional configuration of an image sensing apparatus according to this embodiment will be described first with reference to a block diagram shown in FIG. 1 . Light from the external world in which an object 90 is present enters an image sensor 103 via an optical system 101 , and the image sensor 103 accumulates charges corresponding to the light incident on it. The image sensor 103 outputs an analog image signal corresponding to the accumulated charges to an A/D conversion unit 104 in a subsequent stage. [0030] The A/D conversion unit 104 converts the analog image signal input from the image sensor 103 into a digital image signal, and outputs the converted digital image signal to a signal processing unit 105 and a media I/F 107 in subsequent stages. [0031] The signal processing unit 105 performs various types of image processing (to be described later) for a color image represented by the digital image signal input from the A/D conversion unit 104 . The signal processing unit 105 outputs the color image having undergone the various types of image processing to a display unit 106 and the media I/F 107 in subsequent stages. [0032] The image sensor 103 will be described next. As shown in FIG. 2 , the image sensor 103 includes solid-state image sensing elements DR, DG, and DB which are two-dimensionally arranged on it. The solid-state image sensing elements DR are used to sense R components at a first sensitivity. The solid-state image sensing elements DG are used to sense G components at the first sensitivity. 
The solid-state image sensing elements DB are used to sense B components at the first sensitivity. Note that “DR” indicates dark red (red with a first brightness), “DG” indicates dark green (green with the first brightness), and “DB” indicates dark blue (blue with the first brightness). The image sensor 103 also includes solid-state image sensing elements LR, LG, and LB which are two-dimensionally arranged on it. The solid-state image sensing elements LR are used to sense R components at a second sensitivity higher than the first sensitivity. The solid-state image sensing elements LG are used to sense G components at the second sensitivity. The solid-state image sensing elements LB are used to sense B components at the second sensitivity. Note that “LR” indicates light red (red with a second brightness higher than the first brightness), “LG” indicates light green (green with the second brightness), and “LB” indicates light blue (blue with the second brightness). [0033] The layout pattern of these solid-state image sensing elements arranged on the image sensor 103 will be described in more detail herein. As shown in FIG. 2 , two-dimensional arrays each including 4×4 solid-state image sensing elements formed from the solid-state image sensing elements DR, DG, DB, LR, LG, and LB are repeatedly arranged on the image sensor 103 without overlapping. In one two-dimensional array of solid-state image sensing elements, the ratio between the numbers of solid-state image sensing elements DR, DG, and DB is 1:2:1. Again in this array, the ratio between the numbers of solid-state image sensing elements LR, LG, and LB is 1:2:1. Moreover, a column of solid-state image sensing elements (or a row of solid-state image sensing elements) formed from the solid-state image sensing elements DG and LG is arranged on the image sensor 103 for every other column (row). 
The solid-state image sensing elements LR and LB can be interchanged with each other, and the solid-state image sensing elements DR and DB can similarly be interchanged with each other. [0034] In this manner, the image sensing apparatus according to this embodiment includes an image sensor on which solid-state image sensing elements for sensing color components at a first sensitivity and solid-state image sensing elements for sensing color components at a second sensitivity higher than the first sensitivity are alternately, two-dimensionally arranged. [0035] The signal processing unit 105 will be described next. A camera control unit 108 performs AE/AF/AWB control. A demosaic unit 109 generates an HDR image by performing interpolation processing for pixels sensed at the first sensitivity and interpolation processing for pixels sensed at the second sensitivity, in a color image represented by the digital image signal input from the A/D conversion unit 104 . [0036] A color processing unit 111 performs various types of color processing such as color balance processing, γ processing, sharpness processing, and noise reduction processing for the HDR image. The color processing unit 111 outputs the HDR image having undergone the various types of color processing to the display unit 106 and media I/F 107 in subsequent stages. [0037] Processing performed by the demosaic unit 109 to obtain the DG value of each pixel in the color image will be described next with reference to FIGS. 3A and 3B showing flowcharts of this processing. Note that processing performed by the demosaic unit 109 to obtain the LG value of each pixel which constitutes the color image is processing in which “DG” is substituted by “LG” in the flowcharts shown in FIGS. 3A and 3B . 
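Although FIG. 2 is not reproduced here, the layout constraints of paragraph [0033] can be illustrated in code. The 4×4 tile below is a hypothetical arrangement chosen only to satisfy the stated conditions (a DG/LG column on every other column, first- and second-sensitivity elements alternating in a checkerboard, and 1:2:1 ratios for DR:DG:DB and for LR:LG:LB); the actual array shown in FIG. 2 may differ:

```python
# Hypothetical 4x4 color filter tile consistent with paragraph [0033]:
# - columns 0 and 2 hold only G elements (DG/LG), i.e. every other column
# - "D" (first sensitivity) and "L" (second sensitivity) form a checkerboard
# - DR:DG:DB = 1:2:1 and LR:LG:LB = 1:2:1 within the tile
TILE = [
    ["DG", "LR", "DG", "LB"],  # row j = 0
    ["LG", "DB", "LG", "DR"],  # row j = 1
    ["DG", "LB", "DG", "LR"],  # row j = 2
    ["LG", "DR", "LG", "DB"],  # row j = 3
]

def element_at(i, j):
    """Return the element type at sensor position (i, j) (column i, row j).

    The 4x4 tile is repeated over the sensor without overlapping,
    as described for the image sensor 103.
    """
    return TILE[j % 4][i % 4]
```

Note that the checkerboard property means the type prefix ("D" or "L") at (i, j) depends only on the parity of i+j, which is what lets each sensitivity class be interpolated purely from neighbors of the same class.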
[0038] In step S 301 , the demosaic unit 109 secures a memory area, used to perform processing to be described later, in a memory which is provided in itself or managed by it, and initializes both the variables i and j indicating the pixel position in the above-mentioned color image to zero. Note that the variable i indicates the x-coordinate value in the color image, and the variable j indicates the y-coordinate value in the color image. Note also that the position of the upper left corner in the color image is defined as an origin (i,j)=(0,0). The setting of a coordinate system defined in the color image is not limited to this, as a matter of course. [0039] In step S 302 , the demosaic unit 109 reads out, from the above-mentioned memory, map information (filter array) indicating which of solid-state image sensing elements DR, DG, DB, LR, LG, and LB is placed at each position on the image sensor 103 , as shown in FIG. 2 . [0040] In step S 303 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element DG. This amounts to determining whether the pixel at the pixel position (i,j) in the color image is sensed by the solid-state image sensing element DG. This determination can be done in accordance with whether the solid-state image sensing element at the position (i,j) on the image sensor 103 is the solid-state image sensing element DG, upon defining the position of the upper left corner on the image sensor 103 as an origin. [0041] If it is determined in step S 303 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element DG, the process advances to step S 309 ; otherwise, the process advances to step S 304 . 
[0042] In step S 304 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i−1,j−1) in the color image, is the solid-state image sensing element DG. This determination is done in the same way as in step S 303 . [0043] If it is determined in step S 304 that the solid-state image sensing element corresponding to the pixel position (i−1,j−1) is the solid-state image sensing element DG, the process advances to step S 305 ; otherwise, the process advances to step S 310 . [0044] In step S 305 , the demosaic unit 109 calculates the equations presented in mathematical 1. This yields a variation evaluation value deff1 for five pixels juxtaposed from the upper left to the lower right with the pixel at the pixel position (i,j) as the center, and a variation evaluation value deff2 for five pixels juxtaposed from the upper right to the lower left with the pixel at the pixel position (i,j) as the center. Note that P(i,j) indicates the pixel value at the pixel position (i,j) in the color image. [0000] deff1 = |2×P(i,j) − P(i−2,j−2) − P(i+2,j+2)| + |P(i−1,j−1) − P(i+1,j+1)| [0000] deff2 = |2×P(i,j) − P(i−2,j+2) − P(i+2,j−2)| + |P(i−1,j+1) − P(i+1,j−1)|  [Mathematical 1] [0045] In step S 306 , the demosaic unit 109 compares the variation evaluation values deff1 and deff2. If the comparison result shows deff1<deff2, the process advances to step S 307 ; or if this comparison result shows deff1≧deff2, the process advances to step S 308 . [0046] In step S 307 , the demosaic unit 109 performs interpolation calculation using the pixel values of pixels adjacent to the pixel at the pixel position (i,j) to obtain DG(i,j) indicating the DG value of the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 2. 
[0000] DG(i,j) = |(2×P(i,j) − P(i−2,j−2) − P(i+2,j+2)) ÷ 4| + (P(i−1,j−1) + P(i+1,j+1)) ÷ 2   [Mathematical 2] [0047] This interpolation processing amounts to one-dimensional low-pass filter processing, and the coefficient value for each pixel value in the equation presented in mathematical 2 corresponds to a filter coefficient. On the other hand, in step S 308 , the demosaic unit 109 performs interpolation calculation using the pixel values of pixels adjacent to the pixel at the pixel position (i,j) to obtain DG(i,j) indicating the DG value of the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 3. [0000] DG(i,j) = |(2×P(i,j) − P(i−2,j+2) − P(i+2,j−2)) ÷ 4| + (P(i−1,j+1) + P(i+1,j−1)) ÷ 2   [Mathematical 3] [0048] In step S 309 , the demosaic unit 109 substitutes the pixel value P(i,j) for DG(i,j). In step S 310 , the demosaic unit 109 determines whether the value of the variable i is equal to “pel” (the total number of pixels in the x direction in the color image)−1. If it is determined that i=pel−1, the process advances to step S 311 ; or if it is determined that i≠pel−1, the value of the variable i is incremented by one and the processes in step S 303 and subsequent steps are repeated. [0049] In step S 311 , the demosaic unit 109 determines whether the value of the variable j is larger than “line” (the total number of pixels in the y direction in the color image)−1. If it is determined in step S 311 that j>line−1, the process advances to step S 312 ; or if it is determined in step S 311 that j≦line−1, the value of the variable i is initialized to zero, the value of the variable j is incremented by one, and the processes in step S 303 and subsequent steps are repeated. [0050] In step S 312 , the demosaic unit 109 initializes both the variables i and j to zero. 
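The direction-adaptive processing of steps S 305 to S 308 can be sketched as follows. This is a minimal illustration rather than the apparatus itself: P is assumed to be a callable returning the raw mosaic value at a pixel position, and boundary handling is omitted:

```python
def interpolate_dg_diagonal(P, i, j):
    """Diagonal DG interpolation per mathematical 1-3 (steps S 305 - S 308).

    The diagonal with the smaller variation evaluation value is selected,
    and a one-dimensional low-pass filter is applied along it.
    P(i, j) is assumed to return the raw pixel value at (i, j); positions
    near the image border are not handled in this sketch.
    """
    # Mathematical 1: variation along the two diagonals through (i, j)
    deff1 = (abs(2 * P(i, j) - P(i - 2, j - 2) - P(i + 2, j + 2))
             + abs(P(i - 1, j - 1) - P(i + 1, j + 1)))
    deff2 = (abs(2 * P(i, j) - P(i - 2, j + 2) - P(i + 2, j - 2))
             + abs(P(i - 1, j + 1) - P(i + 1, j - 1)))
    if deff1 < deff2:
        # Mathematical 2: upper-left to lower-right diagonal (step S 307)
        return (abs(2 * P(i, j) - P(i - 2, j - 2) - P(i + 2, j + 2)) / 4
                + (P(i - 1, j - 1) + P(i + 1, j + 1)) / 2)
    # Mathematical 3: upper-right to lower-left diagonal (step S 308)
    return (abs(2 * P(i, j) - P(i - 2, j + 2) - P(i + 2, j - 2)) / 4
            + (P(i - 1, j + 1) + P(i + 1, j - 1)) / 2)
```

On a uniform or diagonally linear region the correction term vanishes and the result reduces to the plain average of the two diagonal DG neighbors, which is the intended low-pass behavior.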
In step S 313 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i−1,j) or (i,j−1) in the color image, is the solid-state image sensing element DG. This determination is done in the same way as in step S 303 . [0051] If it is determined in step S 313 that the solid-state image sensing element corresponding to the pixel position (i−1,j) or (i,j−1) is the solid-state image sensing element DG, the process advances to step S 314 ; otherwise, the process advances to step S 318 . [0052] In step S 314 , the demosaic unit 109 calculates equations presented in mathematical 4 to obtain a variation evaluation value deff3 for pixels which are adjacent to a pixel Q at the pixel position (i,j) vertically (in the y direction), and a variation evaluation value deff4 for pixels which are adjacent to the pixel Q horizontally (in the x direction). [0000] deff3 = |P(i,j−1) − P(i,j+1)| [0000] deff4 = |P(i−1,j) − P(i+1,j)|  [Mathematical 4] [0053] In step S 315 , the demosaic unit 109 compares the variation evaluation values deff3 and deff4. If the comparison result shows deff3<deff4, the process advances to step S 316 ; or if this comparison result shows deff3≧deff4, the process advances to step S 317 . [0054] In step S 316 , the demosaic unit 109 performs interpolation calculation using the pixel values of pixels adjacent to the pixel at the pixel position (i,j) to obtain DG(i,j) indicating the DG value of the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 5. 
[0000] DG ( i,j )=( P ( i,j− 1)+ P ( i,j+ 1))÷2   [Mathematical 5] [0055] On the other hand, in step S 317 , the demosaic unit 109 performs interpolation calculation using the pixel values of pixels adjacent to the pixel at the pixel position (i,j) to obtain DG(i,j) indicating the DG value of the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 6. [0000] DG ( i,j )=( P ( i− 1, j )+ P ( i+ 1, j ))÷2   [Mathematical 6] [0056] In step S 318 , the demosaic unit 109 determines whether the value of the variable i is equal to pel−1. If it is determined in step S 318 that i=pel−1, the process advances to step S 319 ; or if it is determined in step S 318 that i≠pel−1, the value of the variable i is incremented by one and the processes in step S 313 and subsequent steps are repeated. [0057] In step S 319 , the demosaic unit 109 determines whether the value of the variable j is larger than line−1. If it is determined in step S 319 that j>line−1, the process ends and a shift to processing according to flowcharts shown in FIGS. 4A and 4B is made. On the other hand, if it is determined in step S 319 that j≦line−1, the value of the variable i is initialized to zero, the value of the variable j is incremented by one, and the processes in step S 313 and subsequent steps are repeated. [0058] Processing performed by the demosaic unit 109 to obtain the DR and DB values of each pixel which constitutes the color image will be described next with reference to FIGS. 4A and 4B showing flowcharts of this processing. Note that processing performed by the demosaic unit 109 to obtain the LR and LB values of each pixel which constitutes the color image is processing in which “DR” is substituted by “LR” and “DB” is substituted by “LB” in the flowcharts shown in FIGS. 4A and 4B . Note also that the processing according to the flowcharts shown in FIGS. 
4A and 4B follows the processing (the processing for DG and LG) according to the flowcharts shown in FIGS. 3A and 3B . [0059] First, in step S 401 , the demosaic unit 109 secures a memory area, used to perform processing to be described later, in a memory which is provided in itself or managed by it, and initializes both the variables i and j indicating the pixel position in the above-mentioned color image to zero. [0060] In step S 402 , the demosaic unit 109 reads out, from the above-mentioned memory, map information (filter array) indicating which of solid-state image sensing elements DR, DG, DB, LR, LG, and LB is placed at each position on the image sensor 103 , as shown in FIG. 2 . [0061] In step S 403 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element DG. This determination is done in the same way as in step S 303 . [0062] If it is determined in step S 403 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element DG, the process advances to step S 404 ; otherwise, the process advances to step S 407 . [0063] In step S 404 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i−1,j) in the color image, is the solid-state image sensing element LB. This determination is done in the same way as in step S 303 . [0064] If it is determined in step S 404 that the solid-state image sensing element corresponding to the pixel position (i−1,j) is the solid-state image sensing element LB, the process advances to step S 405 ; otherwise, the process advances to step S 406 . 
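The vertical/horizontal direction selection of steps S 314 to S 317 described earlier (mathematical 4 to 6) can be sketched as follows. This is a minimal illustration under the assumption that P is indexed as P[i][j] with all four neighbors in range; it is not the patented implementation.

```python
def interpolate_dg_axis(P, i, j):
    """Sketch of steps S314-S317: average along the axis whose two
    neighbors vary less."""
    deff3 = abs(P[i][j - 1] - P[i][j + 1])  # mathematical 4, vertical pair
    deff4 = abs(P[i - 1][j] - P[i + 1][j])  # mathematical 4, horizontal pair
    if deff3 < deff4:
        return (P[i][j - 1] + P[i][j + 1]) / 2  # mathematical 5
    return (P[i - 1][j] + P[i + 1][j]) / 2      # mathematical 6
```

Averaging along the lower-variation axis avoids interpolating across an edge, which is what preserves resolution at this step.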
[0065] In step S 405 , the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 7. This yields DR(i,j) indicating the DR value of the pixel at the pixel position (i,j), DB(i,j) indicating the DB value of this pixel, LR(i,j) indicating the LR value of this pixel, and LB(i,j) indicating the LB value of this pixel. [0000] DR ( i,j )=( DR ( i− 1, j− 1)− DG ( i− 1, j− 1)+ DR ( i+ 1, j+ 1)− DG ( i+ 1, j+ 1))÷2 [0000] DB ( i,j )=( DB ( i− 1, j+ 1)− DG ( i− 1, j+ 1)+ DB ( i+ 1, j− 1)− DG ( i+ 1, j− 1))÷2 [0000] LR ( i,j )=( LR ( i+ 1, j )− LG ( i+ 1, j ))÷2+( LR ( i− 1, j+ 2)− LG ( i− 1, j+ 2)+ LR ( i− 1, j− 2)− LG ( i− 1, j− 2))÷4 [0000] LB ( i,j )=( LB ( i− 1, j )− LG ( i− 1, j ))÷2+( LB ( i+ 1, j+ 2)− LG ( i+ 1, j+ 2)+ LB ( i+ 1, j− 2)− LG ( i+ 1, j− 2))÷4   [Mathematical 7] [0066] In step S 406 , the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 8. This yields DR(i,j), DB(i,j), LR(i,j), and LB(i,j) of the pixel at the pixel position (i,j). [0000] DR ( i,j )=( DR ( i− 1, j+ 1)− DG ( i− 1, j+ 1)+ DR ( i+ 1, j+ 1)− DG ( i+ 1, j+ 1))÷2 [0000] DB ( i,j )=( DB ( i− 1, j− 1)− DG ( i− 1, j− 1)+ DB ( i+ 1, j+ 1)− DG ( i+ 1, j+ 1))÷2 [0000] LR ( i,j )=( LR ( i− 1, j )− LG ( i− 1, j ))÷2+( LR ( i+ 1, j+ 2)− LG ( i+ 1, j+ 2)+ LR ( i+ 1, j− 2)− LG ( i+ 1, j− 2))÷4 [0000] LB ( i,j )=( LB ( i+ 1, j )− LG ( i+ 1, j ))÷2+( LB ( i− 1, j+ 2)− LG ( i− 1, j+ 2)+ LB ( i− 1, j− 2)− LG ( i− 1, j− 2))÷4   [Mathematical 8] [0067] In step S 407 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element LR. This determination is done in the same way as in step S 303 . 
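The color-difference interpolation of mathematical 7 and 8 can be illustrated with the DR line of mathematical 7 alone. The sketch below is a minimal illustration under the assumption that the DR and DG planes already hold values at the diagonal neighbor positions from earlier passes; the function name is hypothetical, and note that, as the equation is written, the result is the average of the (DR − DG) color differences at the two neighbors.

```python
def interpolate_dr_at_dg(DR, DG, i, j):
    """Sketch of the DR line of mathematical 7: combine the (DR - DG)
    color differences of the upper-left and lower-right neighbors."""
    return (DR[i - 1][j - 1] - DG[i - 1][j - 1]
            + DR[i + 1][j + 1] - DG[i + 1][j + 1]) / 2
```

Working in color differences rather than raw values exploits the inter-channel correlation the embodiment relies on: R − G varies more slowly across the image than R itself.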
[0068] If it is determined in step S 407 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element LR, the process advances to step S 408 ; otherwise, the process advances to step S 409 . [0069] In step S 408 , the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 9. This yields DR(i,j), DB(i,j), LR(i,j), and LB(i,j) of the pixel at the pixel position (i,j). [0000] DR ( i,j )=( DR ( i,j+ 1)− DG ( i,j+ 1))÷2+( DR ( i− 2, j− 1)− DG ( i− 2, j− 1)+ DR ( i+ 2, j− 1)− DG ( i+ 2, j− 1))÷4 [0000] DB ( i,j )=( DB ( i,j− 1)− DG ( i,j− 1))÷2+( DB ( i− 2, j+ 1)− DG ( i− 2, j+ 1)+ DB ( i+ 2, j+ 1)− DG ( i+ 2, j+ 1))÷4 [0000] LR ( i,j )= LR ( i,j ) [0000] LB ( i,j )=( LB ( i− 2, j )− LG ( i− 2, j ))÷4+( LB ( i+ 2, j )− LG ( i+ 2, j ))÷4+( LB ( i,j− 2)− LG ( i,j− 2))÷4+( LB ( i,j+ 2)− LG ( i,j+ 2))÷4   [Mathematical 9] [0070] In step S 409 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element LB. This determination is done in the same way as in step S 303 . [0071] If it is determined in step S 409 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element LB, the process advances to step S 410 ; otherwise, the process advances to step S 411 . [0072] In step S 410 , the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 10. This yields DR(i,j), DB(i,j), LR(i,j), and LB(i,j) of the pixel at the pixel position (i,j). 
[0000] DR ( i,j )=( DR ( i,j− 1)− DG ( i,j− 1))÷2+( DR ( i− 2, j+ 1)− DG ( i− 2, j+ 1)+ DR ( i+ 2, j+ 1)− DG ( i+ 2, j+ 1))÷4 [0000] DB ( i,j )=( DB ( i,j+ 1)− DG ( i,j+ 1))÷2+( DB ( i− 2, j− 1)− DG ( i− 2, j− 1)+ DB ( i+ 2, j− 1)− DG ( i+ 2, j− 1))÷4 [0000] LR ( i,j )=( LR ( i− 2, j )− LG ( i− 2, j ))÷4+( LR ( i+ 2, j )− LG ( i+ 2, j ))÷4+( LR ( i,j− 2)− LG ( i,j− 2))÷4+( LR ( i,j+ 2)− LG ( i,j+ 2))÷4 [0000] LB ( i,j )= LB ( i,j )   [Mathematical 10] [0073] In step S 411 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element LG. This determination is done in the same way as in step S 303 . [0074] If it is determined in step S 411 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element LG, the process advances to step S 412 ; otherwise, the process advances to step S 415 . [0075] In step S 412 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i−1,j) in the color image, is the solid-state image sensing element DR. This determination is done in the same way as in step S 303 . [0076] If it is determined in step S 412 that the solid-state image sensing element corresponding to the pixel position (i−1,j) is the solid-state image sensing element DR, the process advances to step S 413 ; otherwise, the process advances to step S 414 . [0077] In step S 413 , the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 11. This yields DR(i,j), DB(i,j), LR(i,j), and LB(i,j) of the pixel at the pixel position (i,j). 
[0000] DR ( i,j )=( DR ( i− 1, j )− DG ( i− 1, j ))÷2+( DR ( i+ 1, j+ 2)− DG ( i+ 1, j+ 2)+ DR ( i+ 1, j− 2)− DG ( i+ 1, j− 2))÷4 [0000] DB ( i,j )=( DB ( i+ 1, j )− DG ( i+ 1, j ))÷2+( DB ( i− 1, j+ 2)− DG ( i− 1, j+ 2)+ DB ( i− 1, j− 2)− DG ( i− 1, j− 2))÷4 [0000] LR ( i,j )=( LR ( i− 1, j+ 1)− LG ( i− 1, j+ 1)+ LR ( i+ 1, j− 1)− LG ( i+ 1, j− 1))÷2 [0000] LB ( i,j )=( LB ( i− 1, j− 1)− LG ( i− 1, j− 1)+ LB ( i+ 1, j+ 1)− LG ( i+ 1, j+ 1))÷2   [Mathematical 11] [0078] On the other hand, in step S 414 , the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 12. This yields DR(i,j), DB(i,j), LR(i,j), and LB(i,j) of the pixel at the pixel position (i,j). [0000] DR ( i,j )=( DR ( i+ 1, j )− DG ( i+ 1, j ))÷2+( DR ( i− 1, j+ 2)− DG ( i− 1, j+ 2)+ DR ( i− 1, j− 2)− DG ( i− 1, j− 2))÷4 [0000] DB ( i,j )=( DB ( i− 1, j )− DG ( i− 1, j ))÷2+( DB ( i+ 1, j+ 2)− DG ( i+ 1, j+ 2)+ DB ( i+ 1, j− 2)− DG ( i+ 1, j− 2))÷4 [0000] LR ( i,j )=( LR ( i− 1, j− 1)− LG ( i− 1, j− 1)+ LR ( i+ 1, j+ 1)− LG ( i+ 1, j+ 1))÷2 [0000] LB ( i,j )=( LB ( i− 1, j+ 1)− LG ( i− 1, j+ 1)+ LB ( i+ 1, j− 1)− LG ( i+ 1, j− 1))÷2   [Mathematical 12] [0079] In step S 415 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element DR. This determination is done in the same way as in step S 303 . [0080] If it is determined in step S 415 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element DR, the process advances to step S 416 ; otherwise, that is, if it is determined in step S 415 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element DB, the process advances to step S 417 . 
[0081] In step S 416 , the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 13. This yields DR(i,j), DB(i,j), LR(i,j), and LB(i,j) of the pixel at the pixel position (i,j). [0000] DR ( i,j )= DR ( i,j ) [0000] DB ( i,j )=( DB ( i− 2, j )− DG ( i− 2, j ))÷4+( DB ( i+ 2, j )− DG ( i+ 2, j ))÷4+( DB ( i,j− 2)− DG ( i,j− 2))÷4+( DB ( i,j+ 2)− DG ( i,j+ 2))÷4 [0000] LR ( i,j )=( LR ( i,j− 1)− LG ( i,j− 1))÷2+( LR ( i− 2, j+ 1)− LG ( i− 2, j+ 1)+ LR ( i+ 2, j+ 1)− LG ( i+ 2, j+ 1))÷4 [0000] LB ( i,j )=( LB ( i,j+ 1)− LG ( i,j+ 1))÷2+( LB ( i− 2, j− 1)− LG ( i− 2, j− 1)+ LB ( i+ 2, j− 1)− LG ( i+ 2, j− 1))÷4   [Mathematical 13] [0082] On the other hand, in step S 417 , the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 14. This yields DR(i,j), DB(i,j), LR(i,j), and LB(i,j) of the pixel at the pixel position (i,j). [0000] DR ( i,j )=( DR ( i− 2, j )− DG ( i− 2, j ))÷4+( DR ( i+ 2, j )− DG ( i+ 2, j ))÷4+( DR ( i,j− 2)− DG ( i,j− 2))÷4+( DR ( i,j+ 2)− DG ( i,j+ 2))÷4 [0000] DB ( i,j )= DB ( i,j ) [0000] LR ( i,j )=( LR ( i,j+ 1)− LG ( i,j+ 1))÷2+( LR ( i− 2, j− 1)− LG ( i− 2, j− 1)+ LR ( i+ 2, j− 1)− LG ( i+ 2, j− 1))÷4 [0000] LB ( i,j )=( LB ( i,j− 1)− LG ( i,j− 1))÷2+( LB ( i− 2, j+ 1)− LG ( i− 2, j+ 1)+ LB ( i+ 2, j+ 1)− LG ( i+ 2, j+ 1))÷4   [Mathematical 14] [0083] In step S 418 , the demosaic unit 109 determines whether the value of the variable i is equal to pel−1. If it is determined in step S 418 that i=pel−1, the process advances to step S 419 ; or if it is determined in step S 418 that i≠pel−1, the value of the variable i is incremented by one and the processes in step S 403 and subsequent steps are repeated. [0084] In step S 419 , the demosaic unit 109 determines whether the value of the variable j is larger than line−1. If it is determined in step S 419 that j>line−1, the process ends. 
On the other hand, if it is determined in step S 419 that j≦line−1, the value of the variable i is initialized to zero, the value of the variable j is incremented by one, and the processes in step S 403 and subsequent steps are repeated. [0085] In this manner, the color components (DR, DG, DB, LR, LG, and LB) for each pixel which constitutes the color image are determined by performing processing according to the flowcharts shown in FIGS. 3A, 3B, 4A, and 4B described above. The feature of this embodiment is that this determination processing is realized by the following calculation processing. That is, a color component of a given pixel sensed at the first sensitivity is obtained by interpolation calculation (first calculation) using a color component of a pixel which is adjacent to the given pixel and is sensed at the first sensitivity. A color component of a given pixel sensed at the second sensitivity is obtained by interpolation calculation (second calculation) using a color component of a pixel which is adjacent to the given pixel and is sensed at the second sensitivity. [0086] In this manner, according to this embodiment, even when pixels with different sensitivities are arranged on the same sensor, the resolution of a portion incapable of being sampled can be partially restored by performing interpolation that uses the color filter with the highest resolution and the correlation between multiple colors. Also, because demosaicing is performed using only pixels of one sensitivity, a stable resolution can be obtained regardless of whether pixel values are saturated.

Second Embodiment

[0087] This embodiment is different from the first embodiment only in the configuration and operation of the demosaic unit 109 . A demosaic unit 109 according to this embodiment has a configuration as shown in FIG. 5 . A saturation determination unit 501 determines, for each pixel which constitutes a color image, whether the pixel value is saturated. 
A first interpolation unit 502 processes a pixel with a saturated pixel value, and a second interpolation unit 503 processes a pixel with an unsaturated pixel value. [0088] Since the operation of the second interpolation unit 503 is the same as that of the demosaic unit 109 , having been described in the first embodiment, only the operation of the first interpolation unit 502 will be mentioned below, and that of the second interpolation unit 503 will not be described. Also, only differences from the first embodiment will be mentioned below, and the second embodiment is the same as the first embodiment except for points to be described hereinafter. [0089] Processing with which the demosaic unit 109 according to this embodiment obtains the DG value of each pixel which constitutes a color image will be described with reference to FIGS. 6A and 6B showing flowcharts of this processing. Note that the following description assumes that a pixel value P(i,j) is stored in advance for DG(i,j). [0090] In step S 601 , the demosaic unit 109 secures a memory area, used to perform processing to be described later, in a memory which is provided in itself or managed by it, and initializes both the variables i and j indicating the pixel position in the above-mentioned color image to zero. [0091] In step S 602 , the demosaic unit 109 reads out, from the above-mentioned memory, map information (filter array) indicating which of solid-state image sensing elements DR, DG, DB, LR, LG, and LB is placed at each position on an image sensor 103 , as shown in FIG. 2 . [0092] In step S 603 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element LG. This determination is done in the same way as in step S 303 . 
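The routing performed by the saturation determination unit 501 can be sketched as below. This is a minimal illustration, not the patented implementation: the threshold test and the value 4095 (the maximum analog output of a hypothetical 12-bit sensor) are assumptions for the sketch, and first_unit and second_unit stand in for the interpolation routines of units 502 and 503.

```python
def route_pixel(P, i, j, first_unit, second_unit, predetermined_value=4095):
    """Route a pixel to the first interpolation unit if its value is
    saturated (at or above the predetermined value), otherwise to the
    second interpolation unit."""
    if P[i][j] >= predetermined_value:  # saturated
        return first_unit(P, i, j)
    return second_unit(P, i, j)
```
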
[0093] If it is determined in step S 603 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element LG, the process advances to step S 604 ; otherwise, the process advances to step S 607 . [0094] In step S 604 , the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i,j) in the color image is saturated. If it is determined in step S 604 that this pixel value is saturated, the process advances to step S 605 ; otherwise, the process advances to step S 606 . Determination as to whether the pixel value is saturated is done in the following way. That is, if the pixel value is equal to or larger than a predetermined value, it is determined that this pixel value is saturated; or if the pixel value is smaller than the predetermined value, it is determined that this pixel value is unsaturated. Although this “predetermined value” is not limited to a specific value, the following description assumes that the maximum value of the sensor's analog output is used for the sake of convenience. [0095] In step S 605 , the demosaic unit 109 operates the first interpolation unit 502 , so the first interpolation unit 502 performs interpolation processing (to be described later) to obtain the DG value=DG(i,j) of the pixel at the pixel position (i,j). Processing performed by the first interpolation unit 502 in this step will be described in detail later. [0096] In step S 606 , the demosaic unit 109 operates the second interpolation unit 503 , so the second interpolation unit 503 performs the same operation as that of the demosaic unit 109 , having been described in the first embodiment, to obtain the DG value=DG(i,j) of the pixel at the pixel position (i,j). [0097] In step S 607 , the demosaic unit 109 determines whether the value of the variable i is equal to pel−1. 
If it is determined in step S 607 that i=pel−1, the process advances to step S 608 ; or if it is determined in step S 607 that i≠pel−1, the value of the variable i is incremented by one and the processes in step S 603 and subsequent steps are repeated. [0098] In step S 608 , the demosaic unit 109 determines whether the value of the variable j is larger than line−1. If it is determined in step S 608 that j>line−1, the process advances to step S 609 . On the other hand, if it is determined in step S 608 that j≦line−1, the value of the variable i is initialized to zero, the value of the variable j is incremented by one, and the processes in step S 603 and subsequent steps are repeated. [0099] In step S 609 , the demosaic unit 109 initializes both the variables i and j to zero. In step S 610 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element DR or DB. This determination is done in the same way as in step S 303 mentioned above. [0100] If it is determined in step S 610 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element DR or DB, the process advances to step S 611 ; otherwise, the process advances to step S 614 . [0101] In step S 611 , the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i−1,j) or (i+1,j) in the color image is saturated. If it is determined in step S 611 that this pixel value is saturated, the process advances to step S 612 ; otherwise, the process advances to step S 613 . [0102] In step S 612 , the demosaic unit 109 operates the first interpolation unit 502 , so the first interpolation unit 502 performs interpolation processing (to be described later) to obtain the DG value=DG(i,j) of the pixel at the pixel position (i,j). 
Processing performed by the first interpolation unit 502 in this step will be described in detail later. [0103] In step S 613 , the demosaic unit 109 operates the second interpolation unit 503 , so the second interpolation unit 503 performs the same operation as that of the demosaic unit 109 , having been described in the first embodiment, to obtain the DG value=DG(i,j) of the pixel at the pixel position (i,j). [0104] In step S 614 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element LR or LB. This determination is done in the same way as in step S 303 mentioned above. [0105] If it is determined in step S 614 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element LR or LB, the process advances to step S 615 ; otherwise, the process advances to step S 618 . [0106] In step S 615 , the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i−1,j−1), (i−1,j+1), (i+1,j−1), or (i+1,j+1) in the color image is saturated. If it is determined in step S 615 that this pixel value is saturated, the process advances to step S 616 ; otherwise, the process advances to step S 617 . [0107] In step S 616 , the demosaic unit 109 operates the first interpolation unit 502 , so the first interpolation unit 502 performs interpolation processing (to be described later) to obtain the DG value=DG(i,j) of the pixel at the pixel position (i,j). Processing performed by the first interpolation unit 502 in this step will be described in detail later. 
[0108] In step S 617 , the demosaic unit 109 operates the second interpolation unit 503 , so the second interpolation unit 503 performs the same operation as that of the demosaic unit 109 , having been described in the first embodiment, to obtain the DG value=DG(i,j) of the pixel at the pixel position (i,j). [0109] In step S 618 , the demosaic unit 109 determines whether the value of the variable i is equal to pel−1. If it is determined in step S 618 that i=pel−1, the process advances to step S 619 ; or if it is determined in step S 618 that i≠pel−1, the value of the variable i is incremented by one and the processes in step S 610 and subsequent steps are repeated. [0110] In step S 619 , the demosaic unit 109 determines whether the value of the variable j is larger than line−1. If it is determined in step S 619 that j>line−1, the process ends. On the other hand, if it is determined in step S 619 that j≦line−1, the value of the variable i is initialized to zero, the value of the variable j is incremented by one, and the processes in step S 610 and subsequent steps are repeated. [0111] Processing performed by the first interpolation unit 502 will be described with reference to FIG. 7 showing a flowchart of this processing. In step S 703 , the first interpolation unit 502 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element LG. This determination is done in the same way as in step S 303 mentioned above. [0112] If it is determined in step S 703 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element LG, the process advances to step S 704 ; otherwise, the process advances to step S 705 . In step S 704 , the first interpolation unit 502 calculates an equation presented in mathematical 15 to determine DG(i,j). 
[0000] DG ( i,j )=α× LG ( i,j )   [Mathematical 15] [0113] where α is a constant (0<α≦1) which represents the gain and is set in advance. [0000] In step S 705 , the first interpolation unit 502 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element DR or DB. This determination is done in the same way as in step S 303 mentioned above. [0114] If it is determined in step S 705 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element DR or DB, the process advances to step S 706 ; otherwise, the process ends. In step S 706 , the first interpolation unit 502 calculates equations presented in mathematical 16. This calculation yields variation evaluation values deff5−1, deff6−1, and deff7−1. The value deff5−1 evaluates the variation between the pixels adjacent to the pixel position (i,j) on the upper left and lower right sides, the value deff6−1 the variation between the pixels adjacent on the right and left sides, and the value deff7−1 the variation between the pixels adjacent on the lower left and upper right sides. Note that if the pixel value of either of the pair of upper left and lower right pixels is saturated, an evaluation value deff5−2 is obtained in place of deff5−1. Also, if the pixel value of either of the pair of right and left pixels is saturated, an evaluation value deff6−2 is obtained in place of deff6−1. Moreover, if the pixel value of either of the pair of lower left and upper right pixels is saturated, an evaluation value deff7−2 is obtained in place of deff7−1. 
Note that MAX is the full range of the pixel value (the difference between the maximum and minimum pixel values): 255 for a pixel value with 8 bits and 65535 for a pixel value with 16 bits. [0000] deff5−1=| P ( i− 1, j− 1)− P ( i+ 1, j+ 1)| [0000] deff5−2=MAX [0000] deff6−1=| P ( i− 1, j )− P ( i+ 1, j )| [0000] deff6−2=MAX [0000] deff7−1=| P ( i− 1, j+ 1)− P ( i+ 1, j− 1)| [0000] deff7−2=MAX   [Mathematical 16] [0115] In the following description, the obtained one of deff5−1 and deff5−2 will be represented as deff5. Similarly, the obtained one of deff6−1 and deff6−2 will be represented as deff6. Again similarly, the obtained one of deff7−1 and deff7−2 will be represented as deff7. In step S 706 , the first interpolation unit 502 compares the evaluation values deff5, deff6, and deff7. If the comparison result shows that both conditions deff5<deff6 and deff5<deff7 are satisfied, the process advances to step S 707 ; otherwise, the process advances to step S 708 . [0116] In step S 707 , the first interpolation unit 502 performs interpolation calculation using the pixel values of pixels adjacent to the pixel position (i,j) on the upper left and lower right sides to obtain DG(i,j) of the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 17. [0000] DG ( i,j )=( P ( i− 1, j− 1)+ P ( i+ 1, j+ 1))÷2   [Mathematical 17] [0117] In step S 708 , the first interpolation unit 502 compares the evaluation values deff6 and deff7. If the comparison result shows deff6<deff7, the process advances to step S 709 ; or if this comparison result shows deff6≧deff7, the process advances to step S 710 . [0118] In step S 709 , the first interpolation unit 502 performs interpolation calculation using the pixel values of pixels adjacent to the pixel position (i,j) on the right and left sides to obtain DG(i,j) of the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 18. 
[0000] DG ( i,j )=( P ( i− 1, j )+ P ( i+ 1, j ))÷2   [Mathematical 18] [0119] In step S 710 , the first interpolation unit 502 performs interpolation calculation using the pixel values of pixels adjacent to the pixel position (i,j) on the lower left and upper right sides to obtain DG(i,j) of the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 19. [0000] DG ( i,j )=( P ( i− 1, j+ 1)+ P ( i+ 1, j− 1))÷2   [Mathematical 19] [0120] In step S 711 , the first interpolation unit 502 performs interpolation calculation using the pixel values of pixels adjacent to the pixel position (i,j) on the right and left sides to obtain DG(i,j) of the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 20. [0000] DG ( i,j )=( P ( i− 1, j )+ P ( i+ 1, j ))÷2   [Mathematical 20] [0121] In this manner, according to this embodiment, even when pixels with different sensitivities are arranged on the same sensor, the resolution of a portion incapable of being sampled can be partially restored by performing interpolation that uses the color filter with the highest resolution and the correlation between multiple colors. [0122] Also, because pixel value determination using two pixels with different sensitivities is performed for unsaturated pixel values, a higher resolution can be obtained. Moreover, even if either pixel value is saturated, pixel interpolation can still be performed, which improves the resolution.

Third Embodiment

[0123] Another embodiment of the demosaic unit 109 according to the second embodiment will be described in the third embodiment. Note that in this embodiment, solid-state image sensing elements arranged on an image sensor 103 preferably have the layout shown in FIG. 9 . Also, only differences from the second embodiment will be mentioned below, and the third embodiment is the same as the second embodiment except for points to be described hereinafter. 
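The direction selection with saturation sentinels performed by the first interpolation unit described above (mathematical 16 to 19) can be sketched as follows. This is a simplified illustration, not the patented implementation: it collapses the step S 708 to S 711 branches into a single comparison, assumes P is indexed as P[i][j] with all neighbors in range, and takes the saturation predicate and the MAX sentinel as parameters.

```python
def first_unit_dg(P, i, j, saturated, max_range=255):
    """Sketch of the first interpolation unit at a DR/DB pixel: a
    direction whose pair contains a saturated pixel is assigned the
    sentinel MAX (mathematical 16), so it loses every comparison and
    interpolation proceeds along an unsaturated direction."""
    def pair_value(a, b):
        return max_range if saturated(a) or saturated(b) else abs(a - b)

    deff5 = pair_value(P[i - 1][j - 1], P[i + 1][j + 1])  # upper-left / lower-right
    deff6 = pair_value(P[i - 1][j], P[i + 1][j])          # left / right
    deff7 = pair_value(P[i - 1][j + 1], P[i + 1][j - 1])  # lower-left / upper-right

    if deff5 < deff6 and deff5 < deff7:                    # mathematical 17
        return (P[i - 1][j - 1] + P[i + 1][j + 1]) / 2
    if deff6 < deff7:                                      # mathematical 18
        return (P[i - 1][j] + P[i + 1][j]) / 2
    return (P[i - 1][j + 1] + P[i + 1][j - 1]) / 2         # mathematical 19
```

At an LG pixel the unit instead simply scales the sensed value by the preset gain α (mathematical 15), so no direction selection is needed there.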
[0124] Processing with which a demosaic unit 109 according to this embodiment obtains the DG value of each pixel which constitutes a color image will be described with reference to FIGS. 8A and 8B showing flowcharts of this processing. Note that the following description assumes that a pixel value P(i,j) is stored in advance for DG(i,j). [0125] In step S 801 , the demosaic unit 109 secures a memory area, used to perform processing to be described later, in a memory which is provided in itself or managed by it, and initializes both the variables i and j indicating the pixel position in the above-mentioned color image to zero. [0126] In step S 802 , the demosaic unit 109 reads out, from the above-mentioned memory, map information (filter array) indicating which of solid-state image sensing elements DR, DG, DB, LR, LG, and LB is placed at each position on the image sensor 103 , as shown in FIG. 9 . [0127] In step S 803 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element DG. This determination is done in the same way as in step S 303 mentioned above. [0128] If it is determined in step S 803 that the solid-state image sensing element corresponding to the pixel position (i,j) is not the solid-state image sensing element DG, the process advances to step S 804 ; otherwise, the process advances to step S 811 . [0129] In step S 804 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element LG. This determination is done in the same way as in step S 303 mentioned above. 
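The element-type tests that drive this flow (steps S 803 and S 804 , like step S 303 earlier) amount to a lookup in the map information followed by a dispatch to the matching interpolation routine. The sketch below is a minimal illustration of that control structure only; the function and handler names are hypothetical, and map_info stands in for the filter array of FIG. 9 .

```python
def demosaic_pass(map_info, pel, line, handlers):
    """Sketch of one raster-scan pass: for each pixel position (i, j),
    the element type recorded in the map information selects which set
    of interpolation equations to apply.  handlers maps an element-type
    string ('DR', 'DG', 'DB', 'LR', 'LG', 'LB') to a function of (i, j)."""
    for j in range(line):       # rows, 0 .. line-1
        for i in range(pel):    # columns, 0 .. pel-1
            kind = map_info[i][j]
            handlers[kind](i, j)
```
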
[0130] If it is determined in step S 804 that the solid-state image sensing element corresponding to the pixel position (i,j) is not the solid-state image sensing element LG, the process advances to step S 812 ; otherwise, the process advances to step S 805 . [0131] In step S 805 , the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i,j) in the color image is saturated. If it is determined in step S 805 that this pixel value is saturated, the process advances to step S 806 ; otherwise, the process advances to step S 810 . [0132] In step S 806 , a first interpolation unit 502 calculates equations presented in mathematical 21. This calculation yields an evaluation value deff8 for pixels adjacent to the pixel position (i,j) on the upper left and lower right sides, and a variation evaluation value deff9 for pixels adjacent to the pixel position (i,j) on the lower left and upper right sides. [0000] deff8=| P ( i− 1, j− 1)− P ( i+ 1, j+ 1)| [0000] deff9=| P ( i− 1, j+ 1)− P ( i+ 1, j− 1)|  [Mathematical 21] [0133] In step S 807 , the evaluation values deff8 and deff9 are compared with each other. If deff8<deff9, the process advances to step S 808 ; or if deff8≧deff9, the process advances to step S 809 . [0134] In step S 808 , the first interpolation unit 502 performs interpolation calculation using an equation presented in mathematical 22. This yields DG(i,j) of the pixel at the pixel position (i,j). [0000] DG ( i,j )=( P ( i− 1, j− 1)+ P ( i+ 1, j+ 1))÷2   [Mathematical 22] [0135] In step S 809 , the first interpolation unit 502 performs interpolation calculation using an equation presented in mathematical 23. This yields DG(i,j) of the pixel at the pixel position (i,j). [0000] DG ( i,j )=( P ( i− 1, j+ 1)+ P ( i+ 1, j− 1))÷2   [Mathematical 23] [0136] In step S 810 , the first interpolation unit 502 performs interpolation calculation using an equation presented in mathematical 24. 
This yields DG(i,j) of the pixel at the pixel position (i,j). [0000] DG ( i,j )=α× P ( i,j )   [Mathematical 24] [0137] In step S 811 , the demosaic unit 109 substitutes the pixel value P(i,j) for DG(i,j). In step S 812 , the demosaic unit 109 determines whether the value of the variable i is equal to pel−1. If it is determined in step S 812 that i=pel−1, the process advances to step S 813 ; or if it is determined in step S 812 that i≠pel−1, the value of the variable i is incremented by one and the processes in step S 803 and subsequent steps are repeated. [0138] In step S 813 , the demosaic unit 109 determines whether the value of the variable j is larger than line−1. If it is determined in step S 813 that j>line−1, the process advances to step S 814 . On the other hand, if it is determined in step S 813 that j≦line−1, the value of the variable i is initialized to zero, the value of the variable j is incremented by one, and the processes in step S 803 and subsequent steps are repeated. [0139] In step S 814 , the demosaic unit 109 initializes both the variables i and j to zero. In step S 815 , the demosaic unit 109 determines whether the pixel position (i,j) corresponds to DG or LG. If it is determined that the pixel position (i,j) corresponds to DG or LG, the process advances to step S 825 ; otherwise, the process advances to step S 816 . [0140] In step S 816 , the demosaic unit 109 determines whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i−2,j), (i+2,j), (i,j), (i,j−2), or (i,j+2), is saturated. This determination is done in the same way as in step S 805 . If it is determined in step S 816 that this solid-state image sensing element is saturated, the process advances to step S 817 ; otherwise, the process advances to step S 821 . [0141] In step S 817 , the first interpolation unit 502 calculates equations presented in mathematical 25 . 
This calculation yields an evaluation value deff10 for pixels adjacent to the pixel position (i,j) on the upper and lower sides, and a variation evaluation value deff11 for pixels adjacent to the pixel position (i,j) on the right and left sides. [0000] deff10=| P ( i,j− 1)− P ( i,j+ 1 )| [0000] deff11=| P ( i− 1, j )− P ( i+ 1, j )|  [Mathematical 25] [0142] In step S 818 , the demosaic unit 109 compares the evaluation values deff10 and deff11. If the comparison result shows deff10<deff11, the process advances to step S 819 ; or if this comparison result shows deff10≧deff11, the process advances to step S 820 . [0143] In step S 819 , the first interpolation unit 502 performs interpolation calculation using an equation presented in mathematical 26. This yields DG(i,j) of the pixel at the pixel position (i,j). [0000] DG ( i,j )=( P ( i,j− 1)+ P ( i,j+ 1))÷2   [Mathematical 26] [0144] In step S 820 , the first interpolation unit 502 performs interpolation calculation using an equation presented in mathematical 27. This yields DG(i,j) of the pixel at the pixel position (i,j). [0000] DG ( i,j )=( P ( i− 1, j )+ P ( i+ 1, j ))÷2   [Mathematical 27] [0145] In step S 821 , a second interpolation unit 503 calculates equations presented in mathematical 28. This calculation yields an evaluation value deff12 for pixels which are adjacent to the pixel position (i,j) vertically, and a variation evaluation value deff13 for pixels which are adjacent to the pixel position (i,j) horizontally. [0000] deff12=|2×( P ( i,j )− P ( i,j− 2)− P ( i,j+ 2))|+| P ( i,j− 1)− P ( i,j+ 1)| [0000] deff13=|2×( P ( i,j )− P ( i− 2, j )− P ( i+ 2, j ))|+| P ( i− 1, j )− P ( i+ 1, j )|  [Mathematical 28] [0146] In step S 822 , the demosaic unit 109 compares the evaluation values deff12 and deff13. If the comparison result shows deff12<deff13, the process advances to step S 823 ; or if this comparison result shows deff12≧deff13, the process advances to step S 824 . 
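Steps S817 to S820 (and, along the diagonals, steps S806 to S809) amount to picking the direction with the smaller variation and averaging across it. A hedged Python sketch, again under the assumption that `P[i][j]` holds the raw mosaic value at pixel position (i,j):

```python
def dg_first_interpolation(P, i, j):
    """Steps S817-S820: compute the evaluation values of mathematical 25,
    then average along the direction showing less variation
    (mathematical 26 or 27)."""
    deff10 = abs(P[i][j - 1] - P[i][j + 1])   # upper/lower neighbours
    deff11 = abs(P[i - 1][j] - P[i + 1][j])   # left/right neighbours
    if deff10 < deff11:                       # step S819
        return (P[i][j - 1] + P[i][j + 1]) / 2.0   # mathematical 26
    return (P[i - 1][j] + P[i + 1][j]) / 2.0       # mathematical 27
```

Averaging across the flatter direction avoids interpolating over an edge, which is how the scheme preserves resolution around saturated pixels.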
[0147] In step S 823 , the second interpolation unit 503 performs interpolation calculation using an equation presented in mathematical 29. This yields DG(i,j) of the pixel at the pixel position (i,j). [0000] DG ( i,j )=|2×( P ( i,j )− P ( i,j− 2)− P ( i,j+ 2))|÷4+( P ( i,j− 1)+ P ( i,j+ 1))÷2   [Mathematical 29] [0148] In step S 824 , the second interpolation unit 503 performs interpolation calculation using an equation presented in mathematical 30. This yields DG(i,j) of the pixel at the pixel position (i,j). [0000] DG ( i,j )=|2×( P ( i,j )− P ( i− 2, j )− P ( i+ 2, j ))|÷4+( P ( i− 1, j )+ P ( i+ 1, j ))÷2   [Mathematical 30] [0149] In step S 825 , the demosaic unit 109 determines whether the value of the variable i is equal to pel−1. If it is determined in step S 825 that i=pel−1, the process advances to step S 826 ; or if it is determined in step S 825 that i≠pel−1, the value of the variable i is incremented by one and the processes in step S 815 and subsequent steps are repeated. [0150] In step S 826 , the demosaic unit 109 determines whether the value of the variable j is larger than line−1. If it is determined in step S 826 that j>line−1, the process ends. On the other hand, if it is determined in step S 826 that j≦line−1, the value of the variable i is initialized to zero, the value of the variable j is incremented by one, and the processes in step S 815 and subsequent steps are repeated. [0151] Processing with which the demosaic unit 109 according to this embodiment obtains the DR and DB values of each pixel which constitutes a color image will be described with reference to FIGS. 10A and 10B showing flowcharts of this processing. [0152] In step S 1001 , the demosaic unit 109 secures a memory area, used to perform processing to be described later, in a memory which is provided in itself or managed by it, and initializes both the variables i and j indicating the pixel position in the above-mentioned color image to zero. 
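The second interpolation unit's rule (mathematicals 28 to 30) adds a second-difference correction term to the plain two-neighbour average. A sketch under the same `P[i][j]` layout assumption:

```python
def dg_second_interpolation(P, i, j):
    """Steps S821-S824: the evaluation values of mathematical 28 pick the
    interpolation direction; mathematical 29 (vertical) or mathematical 30
    (horizontal) then combines a second-difference term with the
    two-neighbour average."""
    deff12 = (abs(2 * (P[i][j] - P[i][j - 2] - P[i][j + 2]))
              + abs(P[i][j - 1] - P[i][j + 1]))
    deff13 = (abs(2 * (P[i][j] - P[i - 2][j] - P[i + 2][j]))
              + abs(P[i - 1][j] - P[i + 1][j]))
    if deff12 < deff13:  # step S823, mathematical 29
        return (abs(2 * (P[i][j] - P[i][j - 2] - P[i][j + 2])) / 4.0
                + (P[i][j - 1] + P[i][j + 1]) / 2.0)
    # step S824, mathematical 30
    return (abs(2 * (P[i][j] - P[i - 2][j] - P[i + 2][j])) / 4.0
            + (P[i - 1][j] + P[i + 1][j]) / 2.0)
```

Unlike the first interpolation unit, this rule uses the pixel's own value P(i,j) in the correction term, which is why it is reserved for the unsaturated branch of step S816.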
[0153] In step S 1002 , the demosaic unit 109 reads out, from the above-mentioned memory, map information (filter array) indicating which of solid-state image sensing elements DR, DG, DB, LR, LG, and LB is placed at each position on the image sensor 103 , as shown in FIG. 9 . [0154] In step S 1003 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element DG. This determination is done in the same way as in step S 303 mentioned above. If it is determined that the pixel position (i,j) corresponds to DG, the process advances to step S 1004 ; otherwise, the process advances to step S 1007 . [0155] In step S 1004 , the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i−1,j), (i+1,j), (i,j−1), or (i,j+1) in the color image is saturated. If it is determined in step S 1004 that this pixel value is saturated, the process advances to step S 1005 ; otherwise, the process advances to step S 1006 . [0156] In step S 1005 , the first interpolation unit 502 performs interpolation calculation using equations presented in mathematical 31. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position (i,j). 
[0000] when P(i−1,j) corresponds to LB [0000] DR ( i,j )=( DR ( i,j− 1)− DG ( i,j− 1 ))÷2+( DR ( i− 2, j+ 1)− DG ( i− 2, j+ 1)+ DR ( i+ 2, j+ 1)− DG ( i+ 2, j+ 1))÷4 [0000] DB ( i,j )=( DB ( i+ 1, j )− DG ( i+ 1, j ))÷2+( DB ( i− 1, j− 2)− DG ( i− 1, j− 2)+ DB ( i− 1, j+ 2)− DG ( i− 1, j+ 2))÷4 [0000] when P(i−1,j) corresponds to DB [0000] DR ( i,j )=( DR ( i,j+ 1)− DG ( i,j+ 1))÷2+( DR ( i− 2, j− 1)− DG ( i− 2, j− 1)+ DR ( i+ 2, j− 1)− DG ( i+ 2, j− 1))÷4 [0000] DB ( i,j )=( DB ( i− 1, j )− DG ( i− 1, j ))÷2+( DB ( i+ 1, j− 2)− DG ( i+ 1, j− 2)+ DB ( i+ 1, j+ 2)− DG ( i+ 1, j+ 2))÷4   [Mathematical 31] [0157] In step S 1006 , the second interpolation unit 503 performs interpolation calculation using equations presented in mathematical 32. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position (i,j). [0000] when P(i−1,j) corresponds to LB [0000] DR ( i,j )=( LR ( i,j− 1)− DR ( i,j+ 1))÷2 [0000] DB ( i,j )=( LB ( i− 1, j )− DB ( i+ 1, j ))÷2 [0000] when P(i−1,j) corresponds to DB [0000] DR ( i,j )=( DR ( i,j− 1)− LR ( i,j+ 1))÷2 [0000] DB ( i,j )=( DB ( i− 1, j )− LB ( i+ 1, j ))÷2   [Mathematical 32] [0158] In step S 1007 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element LR. This determination is done in the same way as in step S 303 mentioned above. If it is determined that the pixel position (i,j) corresponds to LR, the process advances to step S 1008 ; otherwise, the process advances to step S 1011 . [0159] In step S 1008 , the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i,j) in the color image is saturated. If it is determined in step S 1008 that this pixel value is saturated, the process advances to step S 1009 ; otherwise, the process advances to step S 1010 . 
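The saturated-case equations of mathematical 31 (and, symmetrically, mathematical 37) interpolate colour differences rather than raw values: the difference DR−DG is taken at the nearest R sites, weighted 1/2 for the adjacent site and 1/4 for each of the two farther sites. Below is a sketch of the first DR equation only, assuming `DR` and `DG` are 2-D arrays that already hold values at the referenced positions (an assumption about the processing order, not stated in the text):

```python
def dr_colour_difference(DR, DG, i, j):
    """First DR equation of mathematical 31 (case: P(i-1,j) corresponds
    to LB): weighted average of the colour differences DR-DG at the three
    nearest R sites around the pixel position (i,j)."""
    return ((DR[i][j - 1] - DG[i][j - 1]) / 2.0
            + (DR[i - 2][j + 1] - DG[i - 2][j + 1]
               + DR[i + 2][j + 1] - DG[i + 2][j + 1]) / 4.0)
```

Working in the DR−DG domain exploits the inter-channel correlation mentioned in paragraph [0121]: colour differences vary more slowly across an image than raw channel values, so interpolating them loses less resolution.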
[0160] In step S 1009 , the first interpolation unit 502 performs interpolation calculation using equations presented in mathematical 33. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position (i,j). [0000] DR ( i,j )=( DR ( i− 2, j )− DG ( i− 2, j ))÷4+( DR ( i+ 2, j )− DG ( i+ 2, j ))÷4+( DR ( i,j− 2)− DG ( i,j− 2))÷4+( DR ( i,j+ 2)− DG ( i,j+ 2))÷4 [0000] DB ( i,j )=( DB ( i− 1, j− 1)+ DB ( i+ 1, j+ 1))÷2   [Mathematical 33] [0161] In step S 1010 , the second interpolation unit 503 performs interpolation calculation using equations presented in mathematical 34. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position (i,j). [0000] DR ( i,j )= P ( i,j )×α [0000] DB ( i,j )=( DB ( i− 1, j− 1)+ DB ( i+ 1, j+ 1))÷2   [Mathematical 34] [0162] In step S 1011 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element LB. This determination is done in the same way as in step S 303 mentioned above. If it is determined that the pixel position (i,j) corresponds to LB, the process advances to step S 1012 ; otherwise, the process advances to step S 1015 . [0163] In step S 1012 , the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i,j) in the color image is saturated. If it is determined in step S 1012 that this pixel value is saturated, the process advances to step S 1013 ; otherwise, the process advances to step S 1014 . [0164] In step S 1013 , the first interpolation unit 502 performs interpolation calculation using equations presented in mathematical 35. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position (i,j). 
[0000] DR ( i,j )=( DR ( i− 1, j− 1)+ DR ( i+ 1, j+ 1))÷2 [0000] DB ( i,j )=( DB ( i− 2, j )− DG ( i− 2, j ))÷4+( DB ( i+ 2, j )− DG ( i+ 2, j ))÷4+( DB ( i,j− 2)− DG ( i,j− 2))÷4+( DB ( i,j+ 2)− DG ( i,j+ 2))÷4   [Mathematical 35] [0165] In step S 1014 , the second interpolation unit 503 performs interpolation calculation using equations presented in mathematical 36. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position (i,j). [0000] DR ( i,j )=( DR ( i− 1, j− 1)+ DR ( i+ 1, j+ 1))÷2 [0000] DB ( i,j )= P ( i,j )×α  [Mathematical 36] [0166] In step S 1015 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element LG. This determination is done in the same way as in step S 303 mentioned above. If it is determined that the pixel position (i,j) corresponds to LG, the process advances to step S 1016 ; otherwise, the process advances to step S 1019 . [0167] In step S 1016 , the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i−1,j), (i+1,j), (i,j−1), or (i,j+1) in the color image is saturated. If it is determined in step S 1016 that this pixel value is saturated, the process advances to step S 1017 ; otherwise, the process advances to step S 1018 . [0168] In step S 1017 , the first interpolation unit 502 performs interpolation calculation using equations presented in mathematical 37. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position (i,j). 
[0000] when P(i−1,j) corresponds to LR [0000] DR ( i,j )=( DR ( i+ 1, j )− DG ( i+ 1, j ))÷2+( DR ( i− 1, j− 2)− DG ( i− 1, j− 2)+ DR ( i− 1, j+ 2)− DG ( i− 1, j+ 2))÷4 [0000] DB ( i,j )=( DB ( i,j− 1)− DG ( i,j− 1))÷2+( DB ( i− 2, j+ 1)− DG ( i− 2, j+ 1)+ DB ( i+ 2, j+ 1)− DG ( i+ 2, j+ 1))÷4 [0000] when P(i−1,j) corresponds to DR [0000] DR ( i,j )=( DR ( i− 1, j )− DG ( i− 1, j ))÷2+( DR ( i+ 1, j− 2)− DG ( i+ 1, j− 2)+ DR ( i+ 1, j+ 2)− DG ( i+ 1, j+ 2))÷4 [0000] DB ( i,j )=( DB ( i,j+ 1)− DG ( i,j+ 1))÷2+( DB ( i− 2, j− 1)− DG ( i− 2, j− 1)+ DB ( i+ 2, j− 1)− DG ( i+ 2, j− 1))÷4   [Mathematical 37] [0169] In step S 1018 , the second interpolation unit 503 performs interpolation calculation using equations presented in mathematical 38. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position (i,j). [0000] when P(i−1,j) corresponds to LR [0000] DR ( i,j )=( LR ( i,j− 1)− DR ( i,j+ 1))÷2 [0000] DB ( i,j )=( DB ( i− 1, j )− LB ( i+ 1, j ))÷2 [0000] when P(i−1,j) corresponds to DR [0000] DR ( i,j )=( DR ( i,j− 1)− LR ( i,j+ 1))÷2 [0000] DB ( i,j )=( LB ( i− 1, j )− DB ( i+ 1, j ))÷2   [Mathematical 38] [0170] In step S 1019 , the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103 , which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element DR. This determination is done in the same way as in step S 303 mentioned above. If it is determined that the pixel position (i,j) corresponds to DR, the process advances to step S 1020 ; otherwise, the process advances to step S 1021 . [0171] In step S 1020 , the first interpolation unit 502 performs interpolation calculation using equations presented in mathematical 39. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position (i,j). 
[0000] DR ( i,j )= P ( i,j ) [0000] DB ( i,j )=( DB ( i− 1, j+ 1)+ DB ( i+ 1, j− 1))÷2   [Mathematical 39] [0172] In step S 1021 , the first interpolation unit 502 performs interpolation calculation using equations presented in mathematical 40. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position (i,j). [0000] DR ( i,j )=( DR ( i− 1, j+ 1)+ DR ( i+ 1, j− 1))÷2 [0000] DB ( i,j )= P ( i,j )   [Mathematical 40] [0173] As has been described above, according to this embodiment, even when pixels with different sensitivities are arranged on the same sensor, the resolution lost in portions that cannot be sampled directly can be partially restored by performing interpolation that uses the correlation between the color filter with the highest resolution and the other colors. [0174] Also, because pixel value determination which uses two pixels with different sensitivities is performed for an unsaturated pixel value, a higher resolution can be obtained. Moreover, even if either pixel value is saturated, pixel interpolation can still be performed, which improves the resolution. Other Embodiments [0175] The present invention can also be practiced by executing the following processing. That is, software (program) which implements the functions of the above-described embodiments is supplied to a system or apparatus via a network or various kinds of storage media, and read out and executed by a computer (for example, a CPU or an MPU) of the system or apparatus. [0176] The present invention is not limited to the above-described embodiments, and various changes and modifications can be made within the spirit and scope of the present invention. Therefore, to apprise the public of the scope of the present invention, the following claims are made. [0177] This application claims the benefit of Japanese Patent Application Nos. 2010-010366, filed Jan. 20, 2010, and 2010-288556, filed Dec. 24, 2010, which are hereby incorporated by reference herein in their entirety.
