
h-net's Introduction

H-Net

This repository is a TensorFlow implementation of

Liu, Weiquan, Xuelun Shen, Cheng Wang, Zhihong Zhang, Chenglu Wen, and Jonathan Li. "H-Net: Neural Network for Cross-domain Image Patch Matching." In IJCAI, pp. 856-863. 2018.

If you use this code in your research, please cite the paper.

(Figure: H-Net architecture)

Environment

This code is based on Python 3 and TensorFlow with CUDA 9.0.

Pretrained models

  1. Download the pretrained model.
  2. Extract it into the current folder so that the files are placed under the model directory, for example.

Cross Domain Dataset

There are 160 pairs of full-size images for training. You can process them as you wish.

Download the test data and extract it. (Remember to change the data_base_path in trainHNet.py; see the example below.)
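
For example, a hypothetical edit in trainHNet.py (the path below is only a placeholder for wherever you extracted the dataset):

# In trainHNet.py: point data_base_path at the folder containing the extracted data.
data_base_path = '/path/to/cross_domain_dataset'  # placeholder path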

h-net's People

Contributors

xuelunshen


h-net's Issues

Single Prediction using Trained Network

Hi everyone,

I have tested the network successfully. Now, for example, I have a pair of images that I want to pass through the network. How can I pass an image pair to a pre-trained network and get a probability score for their similarity?

Thanks,
Ali
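
One way to do this with a TensorFlow 1.x checkpoint is to restore the graph from its meta file and feed a preprocessed pair into the network's input placeholders. The sketch below is not code from this repository: the checkpoint path, patch size, and the placeholder/tensor names (input_a:0, input_b:0, similarity:0) are assumptions and must be replaced with the names actually defined in trainHNet.py.

# Minimal sketch of single-pair inference with a restored TF1 checkpoint.
# All names marked "hypothetical" must be replaced with the repo's real ones.
import cv2
import numpy as np
import tensorflow as tf

def load_patch(path, size=64):
    # Preprocess one patch; the size and normalization are assumptions and
    # should mirror whatever trainHNet.py does during training.
    img = cv2.imread(path)
    img = cv2.resize(img, (size, size), interpolation=cv2.INTER_CUBIC)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    return img[None, ...]  # add a batch dimension

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('model/model.meta')   # hypothetical path
    saver.restore(sess, tf.train.latest_checkpoint('model'))
    graph = tf.get_default_graph()
    input_a = graph.get_tensor_by_name('input_a:0')          # hypothetical name
    input_b = graph.get_tensor_by_name('input_b:0')          # hypothetical name
    score = graph.get_tensor_by_name('similarity:0')         # hypothetical name
    prob = sess.run(score, feed_dict={input_a: load_patch('patch1.png'),
                                      input_b: load_patch('patch2.png')})
    print('similarity score:', prob)

A quick way to find the real tensor names is to print [n.name for n in tf.get_default_graph().as_graph_def().node] after importing the meta graph.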

OutOfRangeError: RandomShuffleQueue '_1_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 32, current size 0)

Hello,

I am trying to train H-Net on my own dataset to find the similarity between a real and a simulated image. During training I get the following error:

tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_1_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 32, current size 0) [[{{node shuffle_batch}}]]

I believe the error is caused by the tfrecords. I wrote the following script to create them:

import tensorflow as tf
import cv2
import sys
import numpy as np
from random import shuffle
import glob
import os
shuffle_data = True 


addrs = []

DATADIR_PHOTO = r"X:\xx\xx\real" 
DATADIR_UDS = r"X:\xx\xx\simulated"

for (photo, uds) in zip(os.listdir(DATADIR_PHOTO), os.listdir(DATADIR_UDS)):
        addrs.append([os.path.join(DATADIR_PHOTO, photo), os.path.join(DATADIR_UDS,uds)])
   
labels = [-1 if 'real' in addr else -1 for addr in addrs]

# to shuffle data
if shuffle_data:
    c = list(zip(addrs, labels))
    shuffle(c)
    addrs, labels = zip(*c)

# Divide the data into 60% train, 20% validation, and 20% test
train_addrs = addrs[0:int(0.6*len(addrs))]
train_labels = labels[0:int(0.6*len(labels))]

val_addrs = addrs[int(0.6*len(addrs)):int(0.8*len(addrs))]
val_labels = labels[int(0.6*len(addrs)):int(0.8*len(addrs))]

test_addrs = addrs[int(0.8*len(addrs)):]
test_labels = labels[int(0.8*len(labels)):]    


def load_image(addr):
    # read an image and resize to (224, 224)
    # cv2 load images as BGR, convert it to RGB
    img = cv2.imread(addr)
    img = cv2.resize(img, (224, 224), interpolation=cv2.INTER_CUBIC)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = img.astype(np.uint8)
    return img


def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))


def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

train_filename = 'train.tfrecords'  # address to save the TFRecords file
# open the TFRecords file
writer = tf.python_io.TFRecordWriter(train_filename)

for i in range(len(train_addrs)):
    
    # print how many images are saved every 1000 images
    if not i % 1000:
        print ('Train data: {}/{}'.format(i, len(train_addrs)))
        sys.stdout.flush()
    # Load the image
    photo = load_image(train_addrs[i][0])
    uds = load_image(train_addrs[i][1])
    label = train_labels[i]
     
    # Create a feature
    feature = {'label': _int64_feature(label),
               'photo': _bytes_feature(tf.compat.as_bytes(photo.tostring())),
               'uds': _bytes_feature(tf.compat.as_bytes(uds.tostring()))}
   
    # Create an example protocol buffer
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    
    # Serialize to string and write on the file
    writer.write(example.SerializeToString())
    
writer.close()
sys.stdout.flush()

# open the TFRecords file
val_filename = 'val.tfrecords'  # address to save the TFRecords file
writer = tf.python_io.TFRecordWriter(val_filename)
for i in range(len(val_addrs)):
    # print how many images are saved every 1000 images
    if not i % 1000:
        print ('Val data: {}/{}'.format(i, len(val_addrs)))
        sys.stdout.flush()
    # Load the image
    photo = load_image(val_addrs[i][0])
    uds = load_image(val_addrs[i][1])
    label = val_labels[i]
    # Create a feature
    feature = {'label': _int64_feature(label),
               'photo': _bytes_feature(tf.compat.as_bytes(photo.tostring())),
               'uds': _bytes_feature(tf.compat.as_bytes(uds.tostring()))}
    # Create an example protocol buffer
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    # Serialize to string and write on the file
    writer.write(example.SerializeToString())
writer.close()
sys.stdout.flush()

# open the TFRecords file
test_filename = 'test.tfrecords'  # address to save the TFRecords file
writer = tf.python_io.TFRecordWriter(test_filename)
for i in range(len(test_addrs)):
    # print how many images are saved every 1000 images
    if not i % 1000:
        print ('Test data: {}/{}'.format(i, len(test_addrs)))
        sys.stdout.flush()
    # Load the image
    photo = load_image(test_addrs[i][0])
    uds = load_image(test_addrs[i][1])
    label = test_labels[i]
    # Create a feature
    feature = {'label': _int64_feature(label),
               'photo': _bytes_feature(tf.compat.as_bytes(photo.tostring())),
               'uds': _bytes_feature(tf.compat.as_bytes(uds.tostring()))}
    # Create an example protocol buffer
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    # Serialize to string and write on the file
    writer.write(example.SerializeToString())
writer.close()
sys.stdout.flush()

The output of the tfrecord script was:

Train data: 0/6324
Train data: 1000/6324
Train data: 2000/6324
Train data: 3000/6324
Train data: 4000/6324
Train data: 5000/6324
Train data: 6000/6324
Val data: 0/2108
Val data: 1000/2108
Val data: 2000/2108
Test data: 0/2108
Test data: 1000/2108
Test data: 2000/2108

Could you please point out where I am making a mistake? It would be highly appreciated.
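
For reference, this error usually means the input queue never received a single usable example: if the feature keys, dtypes, image size, or tfrecord filename in the reader do not match what was written, the enqueue thread fails, the queue is closed while empty, and shuffle_batch raises exactly this OutOfRangeError. The sketch below is a generic TF1 queue-based reader that matches the records written by the script above; it is not the actual input pipeline of trainHNet.py, whose expected feature names and patch size may differ.

# Generic sketch of a TF1 reader matching the writer above (not this repo's code).
import tensorflow as tf

def read_pair(filename_queue, image_size=224):
    reader = tf.TFRecordReader()
    _, serialized = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized,
        features={'label': tf.FixedLenFeature([], tf.int64),
                  'photo': tf.FixedLenFeature([], tf.string),
                  'uds': tf.FixedLenFeature([], tf.string)})
    # decode_raw dtype must match what was written (uint8 in load_image above).
    photo = tf.reshape(tf.decode_raw(features['photo'], tf.uint8),
                       [image_size, image_size, 3])
    uds = tf.reshape(tf.decode_raw(features['uds'], tf.uint8),
                     [image_size, image_size, 3])
    return photo, uds, features['label']

filename_queue = tf.train.string_input_producer(['train.tfrecords'])
photo, uds, label = read_pair(filename_queue)
photo_batch, uds_batch, label_batch = tf.train.shuffle_batch(
    [photo, uds, label], batch_size=32, capacity=1000, min_after_dequeue=100)

Inside the session, tf.train.start_queue_runners must be called before running the batch tensors; comparing a reader like this against the decode code in trainHNet.py (feature names, the dtype passed to decode_raw, and the reshape size) is the quickest way to find the mismatch.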

loss to nan

When using the MSE loss function, the loss is too large. What should I do?
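
Generic mitigations (not specific to this repository) when an MSE loss blows up or goes to NaN are to normalize the inputs, lower the learning rate, and clip gradients. A TF1 sketch under those assumptions:

# Generic sketch: wrap an existing loss tensor with a smaller learning rate
# and gradient clipping to keep the loss from exploding to NaN.
import tensorflow as tf

def clipped_train_op(loss, learning_rate=1e-4, clip_norm=5.0):
    optimizer = tf.train.AdamOptimizer(learning_rate)
    grads_and_vars = optimizer.compute_gradients(loss)
    clipped = [(tf.clip_by_norm(g, clip_norm), v)
               for g, v in grads_and_vars if g is not None]
    return optimizer.apply_gradients(clipped)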
