
Python Image Processing: Color Transfer (Reinhard vs. Welsh)

Contents
  • Preface
  • Application Scenarios
  • Motivation
  • Reinhard Algorithm Workflow
  • Welsh Algorithm Workflow
  • Reinhard vs. Welsh
  • Code Implementation
    • Reinhard
    • Welsh Code
  • Result Comparison

    Preface

    Reinhard algorithm: "Color Transfer between Images", by Erik Reinhard

    Welsh algorithm: "Transferring Color to Greyscale Images", by Tomihisa Welsh

    Application Scenarios

    Changing skin tone in portraits, and color transfer between landscape photos.

    Motivation

    1. The three RGB channels are strongly correlated, which makes it hard to change a color while adjusting all three channels in a consistent way.
    2. We therefore need a color space whose three channels are uncorrelated, i.e. orthogonal. The authors use the lαβ color space proposed by Ruderman et al. Orthogonal axes mean that changing one channel does not affect the others, so the image keeps a natural look. The three channels represent luminance, the yellow-blue axis, and the red-green axis.
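    For reference, below is a minimal sketch of the RGB→lαβ conversion described in Reinhard's paper (matrix values as quoted in the paper). Note that the implementations later in this article simply use OpenCV's CIELAB (cv2.COLOR_BGR2LAB) as a practical stand-in for lαβ.

    import numpy as np

    def bgr_to_ruderman_lab(img_bgr):
        """Sketch of the lαβ conversion from Reinhard's paper (not used by the code below)."""
        rgb = img_bgr[..., ::-1].astype(np.float64) / 255.0 + 1e-6  # avoid log(0)
        # RGB -> LMS cone response
        rgb2lms = np.array([[0.3811, 0.5783, 0.0402],
                            [0.1967, 0.7244, 0.0782],
                            [0.0241, 0.1288, 0.8444]])
        lms = np.log10(rgb @ rgb2lms.T)
        # log-LMS -> lαβ: luminance, yellow-blue, red-green (decorrelated axes)
        lms2lab = np.array([[1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)],
                            [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
                            [1/np.sqrt(2), -1/np.sqrt(2),  0.0]])
        return lms @ lms2lab.T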

    Reinhard Algorithm Workflow

    1. Take the input image and the color reference image, and convert both from BGR to Lab space.
    2. Compute the mean and standard deviation of each image in Lab space.
    3. result = (input Lab - input mean) / input std * reference std + reference mean (see the sketch after this list)
    4. Convert the result from Lab back to BGR and output it.
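    A minimal, mask-free sketch of the core transfer in step 3 (the full implementation further below adds per-region mask support):

    import cv2
    import numpy as np

    def reinhard_sketch(in_img, ref_img):
        # match per-channel mean/std statistics in Lab space
        in_lab = cv2.cvtColor(in_img, cv2.COLOR_BGR2LAB).astype(np.float32)
        ref_lab = cv2.cvtColor(ref_img, cv2.COLOR_BGR2LAB).astype(np.float32)
        in_mean, in_std = in_lab.mean(axis=(0, 1)), in_lab.std(axis=(0, 1))
        ref_mean, ref_std = ref_lab.mean(axis=(0, 1)), ref_lab.std(axis=(0, 1))
        in_std[in_std == 0] = 1  # avoid division by zero
        out_lab = (in_lab - in_mean) / in_std * ref_std + ref_mean
        return cv2.cvtColor(np.clip(out_lab, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)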

    Welsh Algorithm Workflow

    1. Take the input image and the color reference image, and convert both from BGR to Lab space.
    2. Define the number of random reference points segment, the neighborhood window size window_size, and the weighting coefficient ratio. Randomly pick segment sample points from the reference image; for each one, record its luminance L and the standard deviation of L within its window_size neighborhood, and combine the two into a weight W = L * ratio + std * (1 - ratio). This gives segment weights W, each paired with the a-channel and b-channel values at the corresponding position.
    3. Remap the luminance (L channel) of the input image into the range of the reference image's L channel, so that the pixel matching in the next step works correctly.
    4. Scan the input image pixel by pixel. For each pixel, compute its weight W in the same way as above, find the sample from step 2 whose weight is closest, and copy that sample's a and b values into the pixel's a and b channels. (A sketch of steps 3-4 follows this list.)
    5. Convert the input image from Lab back to BGR.
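    A minimal sketch of steps 3 and 4 (the function names are illustrative; ref_weights, ref_a, ref_b stand for the per-sample arrays produced in step 2):

    import numpy as np

    # Step 3: remap the input luminance into the reference image's luminance range
    def remap_luminance(in_l, ref_l):
        in_min, in_max = float(in_l.min()), float(in_l.max())
        ref_min, ref_max = float(ref_l.min()), float(ref_l.max())
        scale = (ref_max - ref_min) / max(in_max - in_min, 1e-6)
        return ref_min + (in_l.astype(np.float32) - in_min) * scale

    # Step 4: match a single pixel against the sampled reference weights
    def match_pixel(pixel_l, pixel_std, ref_weights, ref_a, ref_b, ratio=0.5):
        w = pixel_l * ratio + pixel_std * (1 - ratio)  # same weight formula as step 2
        idx = np.argmin(np.abs(ref_weights - w))       # nearest-weight sample
        return ref_a[idx], ref_b[idx]                  # copy its a/b chroma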

    Reinhard vs. Welsh

    1. Reinhard is simple to implement, efficient, and much faster.
    2. Welsh has to compute the reference image's weights W. If the reference image is fixed and known in advance, this step can be moved into initialization (see the sketch after this list); otherwise it is itself quite time-consuming.
    3. Welsh is much slower overall, mainly because of the neighborhood standard-deviation computation.
    4. Welsh's output depends on the number and positions of the random reference points, so the result differs from run to run.
    5. Welsh's output can look unevenly "smeared"; Reinhard does not have this problem.
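    A sketch of point 2, assuming the get_weight_pixel helper defined in the Welsh code below: for a fixed reference image, compute its weights once at startup and reuse the cached arrays for every input image.

    import cv2
    import numpy as np

    # Hypothetical initialization-time cache for a fixed, known reference image.
    class WelshReference:
        def __init__(self, ref_img, segment=10000, window_size=5, ratio=0.5):
            h, w = ref_img.shape[:2]
            l, a, b = cv2.split(cv2.cvtColor(ref_img, cv2.COLOR_BGR2Lab))
            # done once; per-image transfer then only needs these cached arrays
            self.weights, self.a, self.b = get_weight_pixel(
                l, a, b, h, w, segment, window_size, ratio)
            self.l_min, self.l_max = np.min(l), np.max(l)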

    Code Implementation

    Reinhard

    import cv2
    import numpy as np

    def color_trans_reinhard(in_img, ref_img, in_mask_lists=[None], ref_mask_lists=[None]):
        ref_img_lab = cv2.cvtColor(ref_img, cv2.COLOR_BGR2LAB)
        in_img_lab = cv2.cvtColor(in_img, cv2.COLOR_BGR2LAB)
     
        in_avg = np.ones(in_img.shape, np.float32)
        in_std = np.ones(in_img.shape, np.float32)
        ref_avg = np.ones(in_img.shape, np.float32)
        ref_std = np.ones(in_img.shape, np.float32)
     
        mask_all = np.zeros(in_img.shape, np.float32)
        for in_mask, ref_mask in zip(in_mask_lists, ref_mask_lists):
            # mask values are 0 or 255, shape [height, width]
            in_avg_tmp, in_std_tmp = cv2.meanStdDev(in_img_lab, mask=in_mask)
            np.copyto(in_avg, in_avg_tmp.reshape(1,1,-1), where=np.expand_dims(in_mask,2)!=0) #numpy.copyto(destination, source)
            np.copyto(in_std, in_std_tmp.reshape(1,1,-1), where=np.expand_dims(in_mask,2)!=0) 
     
            ref_avg_tmp, ref_std_tmp = cv2.meanStdDev(ref_img_lab, mask=ref_mask)
            np.copyto(ref_avg, ref_avg_tmp.reshape(1,1,-1), where=np.expand_dims(in_mask,2)!=0) #numpy.copyto(destination, source)
            np.copyto(ref_std, ref_std_tmp.reshape(1,1,-1), where=np.expand_dims(in_mask,2)!=0) 
     
            #mask
            mask_all[in_mask!=0] = 1
     
        in_std[in_std==0] = 1  # avoid division by zero
        transfered_lab = (in_img_lab - in_avg)/(in_std) *ref_std + ref_avg 
        transfered_lab[transfered_lab<0] = 0
        transfered_lab[transfered_lab>255] = 255
     
        out_img = cv2.cvtColor(transfered_lab.astype(np.uint8), cv2.COLOR_LAB2BGR)
     
        if in_mask_lists[0] is not None and ref_mask_lists[0] is not None:
            np.copyto(out_img, in_img, where=mask_all==0) 
            
        return out_img
     
     
    """
    #img1 = cv2.imread("imgs/1.png")
    #img2 = cv2.imread("imgs/2.png")
    #img1 = cv2.imread("welsh22/1.png", 1)
    #img2 = cv2.imread("welsh22/2.png", 1)
    img1 = cv2.imread("welsh22/gray.jpg", 1)
    img2 = cv2.imread("welsh22/consult.jpg", 1)
    cv2.imwrite("out.jpg", color_trans_reinhard(img1, img2, [np.ones(img1.shape[:-1],np.uint8)*255], [np.ones(img2.shape[:-1],np.uint8)*255]))
    """
    img1 = cv2.imread("ab.jpeg")
    img2 = cv2.imread("hsy.jpeg")
    mask1 = cv2.imread("ab_parsing.jpg", 0)
    mask1[mask1<128]=0
    mask1[mask1>=128]=255
    mask2 = cv2.imread("hsy_parsing.jpg", 0)
    mask2[mask2<128]=0
    mask2[mask2>=128]=255
    cv2.imwrite("out.jpg", color_trans_reinhard(img1, img2, [mask1], [mask2]))

    Welsh Code

    Improvements

    1. Remove the per-pixel for loops wherever possible.
    2. Replace the exact neighborhood std computation with an approximation built from a mean (box) filter plus NumPy operations; the difference is not visible by eye (see the verification sketch after this list).
    3. Convert the reference weights to int and keep only the distinct values; in practice around 150 distinct weights are enough.
    4. Replace the binary search for the nearest weight with a NumPy subtraction plus argmin.
    5. Overall about 18x faster than the original code.
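    A small verification sketch of point 2: the local std is approximated as sqrt(boxfilter((x - boxfilter(x))^2)), which can be spot-checked against an exact sliding-window std (the helper names here are illustrative):

    import cv2
    import numpy as np

    def local_std_boxfilter(img_l, window_size=5):
        # approximate neighborhood std with two box filters (as in the code below)
        l = img_l.astype(np.float32)
        mean = cv2.blur(l, (window_size, window_size))
        return np.sqrt(cv2.blur((l - mean) ** 2, (window_size, window_size)))

    def local_std_exact(img_l, y, x, window_size=5):
        # exact std of the window centered at (y, x), for comparison
        h, w = img_l.shape
        half = window_size // 2
        patch = img_l[max(y - half, 0):min(y + half + 1, h),
                      max(x - half, 0):min(x + half + 1, w)]
        return np.std(patch.astype(np.float32))
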
    import time

    import cv2
    import numpy as np

    def get_domain_std(img_l, pixel, height, width, window_size):
        window_left = max(pixel[1] - window_size, 0)
        window_right = min(pixel[1] + window_size + 1, width)
        window_top = max(pixel[0] - window_size, 0)
        window_bottom = min(pixel[0] + window_size + 1, height)
     
        window_slice = img_l[window_top: window_bottom, window_left: window_right]
     
        return np.std(window_slice)
     
     
    def get_weight_pixel(ref_img_l, ref_img_a, ref_img_b, ref_img_height, ref_img_width, segment, window_size, ratio, ref_mask_lists=[None]):
        weight_list = []
        pixel_a_list = []
        pixel_b_list = []
     
        ref_img_mask  = np.ones((ref_img_height, ref_img_width), np.uint8)
        if ref_mask_lists[0] is not None:
            # with masks supplied, restrict sampling to the mask regions
            ref_img_mask = np.zeros((ref_img_height, ref_img_width), np.uint8)
            for x in ref_mask_lists:
                ref_img_mask = np.bitwise_or(x, ref_img_mask)
     
        ref_img_l_mean = cv2.blur(ref_img_l, (window_size, window_size))
        # cast to float32 so the squared differences do not wrap around in uint8
        ref_img_l_std = np.sqrt(cv2.blur(np.power((ref_img_l.astype(np.float32) - ref_img_l_mean), 2),  (window_size, window_size)))
        for _ in range(segment):
            height_index = np.random.randint(ref_img_height)
            width_index = np.random.randint(ref_img_width)
     
                
            pixel = [height_index, width_index]  # [row, col]
     
            if ref_img_mask[pixel[0], pixel[1]] == 0:
                continue
     
            pixel_light = ref_img_l[pixel[0], pixel[1]]
            pixel_a = ref_img_a[pixel[0], pixel[1]]
            pixel_b = ref_img_b[pixel[0], pixel[1]]
     
            #pixel_std = get_domain_std(ref_img_l, pixel, ref_img_height, ref_img_width, window_size)
            pixel_std = ref_img_l_std[height_index, width_index]
     
            weight_value = int(pixel_light * ratio + pixel_std * (1 - ratio))
            if weight_value not in weight_list:
                weight_list.append(weight_value)
                pixel_a_list.append(pixel_a)
                pixel_b_list.append(pixel_b)                          
     
        return np.array(weight_list), np.array(pixel_a_list), np.array(pixel_b_list)
     
     
     
    def color_trans_welsh(in_img, ref_img, in_mask_lists=[None], ref_mask_lists=[None]):
        start = time.time()
        # reference image
        ref_img_height, ref_img_width, ref_img_channel = ref_img.shape
        window_size = 5  # neighborhood window size
        segment = 10000  # number of random sample points
        ratio = 0.5      # weighting coefficient for the weight W
     
        ref_img_lab = cv2.cvtColor(ref_img, cv2.COLOR_BGR2Lab)
        ref_img_l, ref_img_a, ref_img_b = cv2.split(ref_img_lab)
     
        # compute the reference image's weights
        ref_img_weight_array, ref_img_pixel_a_array, ref_img_pixel_b_array =  get_weight_pixel(ref_img_l, ref_img_a, ref_img_b, ref_img_height, ref_img_width, segment, window_size, ratio, ref_mask_lists)
     
        ref_img_max_pixel, ref_img_min_pixel = np.max(ref_img_l), np.min(ref_img_l)
     
     
        # input image
        in_img_height, in_img_width, in_img_channel = in_img.shape
        in_img_lab = cv2.cvtColor(in_img, cv2.COLOR_BGR2LAB)
     
        # get the luminance of the input (grayscale) image
        in_img_l, in_img_a, in_img_b = cv2.split(in_img_lab)
     
        in_img_max_pixel, in_img_min_pixel = np.max(in_img_l), np.min(in_img_l)
        pixel_ratio = (ref_img_max_pixel - ref_img_min_pixel) / (in_img_max_pixel - in_img_min_pixel)
     
        # remap the input luminance into the reference image's range
        in_img_l = ref_img_min_pixel + (in_img_l - in_img_min_pixel) * pixel_ratio
        in_img_l = in_img_l.astype(np.uint8)
     
     
        in_img_l_mean = cv2.blur(in_img_l, (window_size, window_size))
        # cast to float32 to avoid uint8 overflow when squaring the differences
        in_img_l_std = np.sqrt(cv2.blur(np.power((in_img_l.astype(np.float32) - in_img_l_mean), 2),  (window_size, window_size)))
     
     
        in_img_weight_pixel = ratio * in_img_l + (1 - ratio) * in_img_l_std
     
        nearest_pixel_index = np.argmin(np.abs(ref_img_weight_array.reshape(1,1,-1) - np.expand_dims(in_img_weight_pixel, 2)), axis=2).astype(np.float32)
     
        in_img_a = cv2.remap(ref_img_pixel_a_array.reshape(1, -1), nearest_pixel_index, np.zeros_like(nearest_pixel_index, np.float32), interpolation=cv2.INTER_LINEAR)
        in_img_b = cv2.remap(ref_img_pixel_b_array.reshape(1, -1), nearest_pixel_index, np.zeros_like(nearest_pixel_index, np.float32), interpolation=cv2.INTER_LINEAR)
     
     
        merge_img = cv2.merge([in_img_l, in_img_a, in_img_b])
        bgr_img = cv2.cvtColor(merge_img, cv2.COLOR_LAB2BGR)
     
        
        mask_all = np.zeros(in_img.shape[:-1], np.int32)
        if in_mask_lists[0] is not None and ref_mask_lists[0] is not None:
            for x in in_mask_lists:
                mask_all = np.bitwise_or(x, mask_all)
            mask_all = cv2.merge([mask_all, mask_all, mask_all])
            np.copyto(bgr_img, in_img, where=mask_all==0) 
        
        end = time.time()
        print("time", end-start)
        return bgr_img
     
     
     
    if __name__ == '__main__':
     
        # color reference image
        #ref_img = cv2.imread("consult.jpg")
        #ref_img = cv2.imread("2.png")
        ref_img = cv2.imread("../imgs/2.png")
     
        # read the grayscale input image; OpenCV reads it as 3 channels by default, so no channel expansion is needed
        #in_img = cv2.imread("gray.jpg")
        #in_img = cv2.imread("1.png")
        in_img = cv2.imread("../imgs/1.png")
     
        bgr_img = color_trans_welsh(in_img, ref_img)
        cv2.imwrite("out_ren.jpg", bgr_img)
        """
        ref_img = cv2.imread("../hsy.jpeg")
        ref_mask = cv2.imread("../hsy_parsing.jpg", 0)
        ref_mask[ref_mask<128] = 0
        ref_mask[ref_mask>=128] = 255
        in_img = cv2.imread("../ab.jpeg")
        in_mask = cv2.imread("../ab_parsing.jpg", 0)
        in_mask[in_mask<128] = 0
        in_mask[in_mask>=128] = 255
        bgr_img = color_trans_welsh(in_img, ref_img, in_mask_lists=[in_mask], ref_mask_lists=[ref_mask])
        cv2.imwrite("bgr.jpg", bgr_img)
        """

    Result Comparison

    From left to right: original image, reference image, Reinhard result, Welsh result.

    [comparison images]

    From left to right: original image, skin mask of the original, reference image, skin mask of the reference, Reinhard result, Welsh result.

    [comparison image]

    That concludes this detailed look at color transfer in Python (Reinhard vs. Welsh).
