
ImageFeatures.jl's Introduction

ImageFeatures

ImageFeatures.jl's People

Contributors

annimesh2809, ashwani-rathee, cdsousa, davidbp, deepank308, evizero, femtocleaner[bot], github-actions[bot], gzhang8, hyrodium, johnnychen94, juliatagbot, jw3126, mathieu17g, mprat, mronian, pitmonticone, tejus-gupta, timholy, zygmuntszpak


ImageFeatures.jl's Issues

Deploy docs with travis

I spent a lot of time trying to install the Travis client on my system to sync the keys, but there is some problem that I haven't been able to fix. @timholy In case you have it installed on your system, could you generate the Travis key and .documenter.enc file?

GSoC Completion

@timholy Google requires students to submit a single link with the details of the work done over the summer, which will be displayed on their website and used for their assessment. I was thinking of submitting something like this (it's not yet done):

https://mronian.github.io/gsoc-2016/

I will put small details about the APIs etc. under each heading. Is that good enough?

brief.jl: There is an error in the range

I think there is an error in brief.jl: the default range of rand() is [0, 1), so in the random_uniform function, rand() should be changed to (rand().-0.5) * 2

julia> brief_params = BRIEF(sampling_type = random_uniform)
BRIEF{typeof(random_uniform)}(128, 9, 1.4142135623730951, ImageFeatures.random_uniform, 123)
julia> brief_params.window      # window = 9, so the value of CartesianIndex should in [-4:4]
9
julia> B = brief_params.sampling_type(brief_params.size, brief_params.window, brief_params.seed)     # All values of the index are positive and in [0:4].
(CartesianIndex{2}[CartesianIndex(4, 3), CartesianIndex(2, 1), CartesianIndex(0, 1), CartesianIndex(2, 3), CartesianIndex(2, 1), CartesianIndex(1, 3), CartesianIndex(4, 0), CartesianIndex(4, 2), CartesianIndex(4, 1), CartesianIndex(4, 4)    CartesianIndex(2, 0), CartesianIndex(2, 4), CartesianIndex(1, 1), CartesianIndex(3, 4), CartesianIndex(3, 1), CartesianIndex(4, 4), CartesianIndex(1, 1), CartesianIndex(2, 3), CartesianIndex(1, 0), CartesianIndex(3, 3)], CartesianIndex{2}[CartesianIndex(1, 3), CartesianIndex(0, 2), CartesianIndex(2, 0), CartesianIndex(2, 1), CartesianIndex(1, 2), CartesianIndex(3, 1), CartesianIndex(3, 2), CartesianIndex(3, 2), CartesianIndex(0, 2), CartesianIndex(1, 2)    CartesianIndex(1, 2), CartesianIndex(2, 0), CartesianIndex(2, 4), CartesianIndex(1, 4), CartesianIndex(0, 2), CartesianIndex(1, 0), CartesianIndex(0, 2), CartesianIndex(4, 3), CartesianIndex(3, 1), CartesianIndex(0, 1)])
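A minimal sketch of the proposed fix (the function name is hypothetical, only mirroring the shape of random_uniform) shifts rand() so that sampled offsets fall symmetrically around the centre of the window:

```julia
using Random

# Hypothetical sketch of the proposed fix: map rand() from [0, 1) to
# roughly [-1, 1) before scaling, so sampled offsets cover
# [-window ÷ 2, window ÷ 2] rather than only the non-negative quadrant.
function random_uniform_fixed(size::Int, window::Int, seed::Int)
    rng = MersenneTwister(seed)
    half = window ÷ 2
    sample() = CartesianIndex(round.(Int, (rand(rng, 2) .- 0.5) .* 2 .* half)...)
    return [sample() for _ in 1:size], [sample() for _ in 1:size]
end
```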

coords_spatial not defined

I think this might be caused by the renaming planned
https://github.com/JuliaImages/Images.jl/issues/767

[92ff4b2b] ImageFeatures v0.4.5
[916415d5] Images v0.24.1

Works in Images v0.24.0

UndefVarError: coords_spatial not defined

findlocalmaxima(::Matrix{Int64})@algorithms.jl:427
var"#hough_circle_gradient#77"(::Int64, ::Int64, ::Int64, ::typeof(ImageFeatures.hough_circle_gradient), ::Matrix{Bool}, ::Matrix{Float64}, ::UnitRange{Int64})@houghtransform.jl:204
hough_circle_gradient(::Matrix{Bool}, ::Matrix{Float64}, ::UnitRange{Int64})@houghtransform.jl:163
(::Main.workspace65.var"#1#2")(::String)@Local: 8
[email protected]:47[inlined]
_collect(::Vector{String}, ::Base.Generator{Vector{String}, Main.workspace65.var"#1#2"}, ::Base.EltypeUnknown, ::Base.HasShape{1})@array.jl:691
collect_similar(::Vector{String}, ::Base.Generator{Vector{String}, Main.workspace65.var"#1#2"})@array.jl:606
map(::Function, ::Vector{String})@abstractarray.jl:2294
top-level scope@Local: 3

Getting the votes from the Hough transform

In the Hough transform, the votes give a good indication of how likely a certain pair of r and θ is to represent a line. But depending on the thickness and straightness of the lines in the image, the Hough transform can return multiple (r, θ) pairs that represent lines that are very similar to each other (i.e. collinear lines). We can then cluster the (r, θ) pairs into groups (especially useful when we know how many lines there are in the image) and use the center of each group to get a mean line. But it would be even better to have the votes per (r, θ) pair, so that we can weight the r and θ estimates accordingly when calculating the mean per group (and/or even earlier, during the clustering).
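If the vote counts were returned alongside each detection, the per-cluster weighted mean could be computed along these lines (a sketch; `rs`, `thetas`, and `votes` are hypothetical outputs the transform would need to expose):

```julia
# Sketch: vote-weighted mean (r, θ) of one cluster of near-collinear
# detections. `rs`, `thetas`, `votes` are hypothetical per-line outputs.
function weighted_line(rs, thetas, votes)
    w = votes ./ sum(votes)
    r̄ = sum(w .* rs)
    # average θ on the circle, to avoid wrap-around artifacts near ±π
    θ̄ = atan(sum(w .* sin.(thetas)), sum(w .* cos.(thetas)))
    return r̄, θ̄
end
```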

hough_circle_gradient: centers could hold 0 in CartesianIndex

The centers calculation can produce a 0 inside a CartesianIndex, which is an invalid value for array indexing in Julia.

julia> centers, radii = hough_circle_gradient(img_edges, img_phase, 20:30, vote_threshold=2);

julia> img_demo = Float64.(img_edges); for c in centers img_demo[c] = 2; end
ERROR: BoundsError: attempt to access 312×252 Array{Float64,2} at index [152, 0]
Stacktrace:
 [1] setindex! at ./array.jl:769 [inlined]
 [2] setindex!(::Array{Float64,2}, ::Int64, ::CartesianIndex{2}) at ./multidimensional.jl:460
 [3] top-level scope at ./REPL[40]:1 [inlined]
 [4] top-level scope at ./none:0

julia> centers
96-element Array{CartesianIndex{2},1}:
 CartesianIndex(152, 0)
 ⋮
 CartesianIndex(0, 40)

Probably this happens while executing this statement: center=(center-1*one(center))*scale

julia> c = CartesianIndex(1, 2)
CartesianIndex(1, 2)

julia> c = (c - 1*one(c))
CartesianIndex(0, 1)
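Until the off-by-one is fixed upstream, one defensive workaround is to clamp the returned centers into the valid index range before using them (a sketch; the helper name is made up):

```julia
# Workaround sketch: clamp a returned center into the valid index range
# of the image, so indices like CartesianIndex(152, 0) become legal.
clamp_center(c::CartesianIndex{2}, img::AbstractMatrix) =
    CartesianIndex(clamp(c[1], 1, size(img, 1)), clamp(c[2], 1, size(img, 2)))

# usage: for c in centers; img_demo[clamp_center(c, img_demo)] = 2; end
```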

HOG documentation

Either here or at JuliaImages, it would be great to document HOG (there have been many requests).

Dependency on Images.jl directly

I realize this may be an odd question.

It came up in JuliaImages/Images.jl#682 (comment) that ImageFeatures.jl depends on Images.jl, and I found this quite surprising because my mental image for Images.jl so far was as a convenient meta package for importing the whole ecosystem.

So I just felt like double checking this wouldn't hurt. Is there a reason that this package needs to depend on Images.jl directly, instead of the individual backend packages? I quickly browsed through the source files and didn't see any obvious reason like extending a function.

HOG not working

Hello,
I have tried the HOG code but it does not seem to work.
The following code:

img = testimage("lena_gray")
create_descriptor(img, HOG()) 

returns

BoundsError: attempt to access 9×32×32 Array{Float64,3} at index [10, 1, 1]

Stacktrace:
 [1] trilinear_interpolate!(::Array{Float64,3}, ::Float32, ::Float32, ::Int64, ::CartesianIndex{2}, ::Int64, ::Int64, ::Int64, ::Int64, ::Int64) at /Users/davidbuchaca1/.julia/v0.6/ImageFeatures/src/hog.jl:132
 [2] create_hog_descriptor(::Array{Float32,2}, ::Array{Float32,2}, ::ImageFeatures.HOG) at /Users/davidbuchaca1/.julia/v0.6/ImageFeatures/src/hog.jl:82
 [3] create_descriptor(::Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}, ::ImageFeatures.HOG) at /Users/davidbuchaca1/.julia/v0.6/ImageFeatures/src/hog.jl:34
 [4] include_string(::String, ::String) at /Applications/Julia-0.6.app/Contents/Resources/julia/lib/julia/sys.dylib:?

Am I doing something wrong?

Build homography from array of matched points

I have gone over the documentation looking for a method that uses the data returned by match_keypoints to produce a homography matrix, as in ImageFeatures.qd_rigit, with no success. Does such a method exist in the package? I know I can set up a linear system to get the matrix, but this seems (to me) like a natural feature to include in the package.
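In the meantime, the linear system can be set up directly via the direct linear transform (DLT); a sketch, assuming the matches have already been split into two point vectors (one per view):

```julia
using LinearAlgebra

# DLT sketch: fit a 3×3 homography H with q ≈ H p (homogeneous), from
# at least 4 matched points. pts1/pts2 are assumed to be the two sides
# of the matches returned by match_keypoints.
function homography_dlt(pts1::Vector{CartesianIndex{2}}, pts2::Vector{CartesianIndex{2}})
    A = zeros(2 * length(pts1), 9)
    for (i, (p, q)) in enumerate(zip(pts1, pts2))
        x, y = Tuple(p)
        u, v = Tuple(q)
        A[2i - 1, :] = [-x, -y, -1, 0, 0, 0, u * x, u * y, u]
        A[2i, :]     = [0, 0, 0, -x, -y, -1, v * x, v * y, v]
    end
    h = svd(A).V[:, end]      # least-squares null vector of A
    return reshape(h, 3, 3)'  # row-major 3×3 homography
end
```

In practice one would also normalize the points (zero mean, unit scale) before solving, exactly the conditioning step described in the issue below about match_keypoints' return type.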

`sampled_intensities` `eltype` issue in `create_descriptor` for FREAK and BRISK

In the create_descriptor functions for FREAK and BRISK, the sampled_intensities array is declared with the same element type as the input image, but it should be declared with the element type of the integral image of the input image.

For some images I'm working with, it leads to the following error:

julia> using Images, ImageFeatures
       img = load("test_image.png")
       img1 = Gray.(img)::Matrix{Gray{N0f8}} 
       feats_1 = Features(fastcorners(img1, 20, 0.5))
       brisk_params = BRISK()
       desc_1, ret_feats_1 = create_descriptor(img1, feats_1, brisk_params)
       nothing
ERROR: ArgumentError: element type N0f8 is an 8-bit type representing 256 values from 0.0 to 1.0,
  but the values (1.0034722f0,) do not lie within this range.
  See the READMEs for FixedPointNumbers and ColorTypes for more information.
Stacktrace:
  [1] throw_colorerror_(#unused#::Type{N0f8}, values::Tuple{Float32})
    @ ColorTypes ~/.julia/packages/ColorTypes/1dGw6/src/types.jl:686
  [2] throw_colorerror(#unused#::Type{Gray{N0f8}}, values::Tuple{Float32})
    @ ColorTypes ~/.julia/packages/ColorTypes/1dGw6/src/types.jl:736
  [3] checkval
    @ ~/.julia/packages/ColorTypes/1dGw6/src/types.jl:648 [inlined]
  [4] Gray
    @ ~/.julia/packages/ColorTypes/1dGw6/src/types.jl:359 [inlined]
  [5] _convert
    @ ~/.julia/packages/ColorTypes/1dGw6/src/conversions.jl:96 [inlined]
  [6] cconvert
    @ ~/.julia/packages/ColorTypes/1dGw6/src/conversions.jl:76 [inlined]
  [7] convert
    @ ~/.julia/packages/ColorTypes/1dGw6/src/conversions.jl:73 [inlined]
  [8] push!
    @ ./array.jl:1060 [inlined]
  [9] create_descriptor(img::Matrix{Gray{N0f8}}, features::Vector{Feature}, params::BRISK)
    @ ImageFeatures ~/.julia/dev/ImageFeatures/src/brisk.jl:95
 [10] top-level scope
    @ REPL[1]:6

Here is the test image, to reproduce the issue if needed:
test_image

The fix could be to replace

sampled_intensities = T[]

with

sampled_intensities = eltype(int_img)[]
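The widening matters because an integral image accumulates sums that quickly leave the [0, 1] range representable by N0f8; a small demonstration of the failure mode:

```julia
using FixedPointNumbers

# An integral image accumulates sums; with N0f8 elements those sums
# quickly exceed the representable [0, 1] range.
a = N0f8[0.8, 0.8]
s = sum(float.(a))    # ≈ 1.6, fine as a floating-point value
# N0f8(s)             # would throw an ArgumentError: out of range
```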

Rationale for Vector of CartesianIndex{2} for output 'matches' of match_keypoints

I was wondering if someone could explain the design choice for making the match_keypoints function return Vector{CartesianIndex{2}}.

Coming from MATLAB, I was expecting this function to return two matrices, either an N x 2 matrix, or a 2 x N matrix, where the first matrix contains the points in the first view, and the second matrix contains the points in the second view. I work in the field of Multiple View Geometry (MVG), and am in the process of writing a package for Julia that achieves feature parity with MATLAB's Computer Vision toolbox. Typical steps in MVG include converting between homogeneous coordinates (i.e. appending an extra dimension with the value of '1'), standardising the homogeneous coordinates (dividing each point by its third dimension) and converting back to Cartesian coordinates (removing the third coordinate after standardising). Furthermore, we often need to transform coordinates by making the data points have zero mean, and scaling each data point to lie inside a unit box. These steps are very easy when the points are represented as matrices. For example, it allows one to write code such as mean(pts[1,:]) to obtain the mean x coordinate. Transforming all of the data points is also simple, since it just involves multiplying the points by a matrix.

I am struggling to perform these types of operations elegantly with the Vector{CartesianIndex{2}} data structure. At present, I intend to loop over this data structure and explicitly form the two matrices I mentioned. However, this duplication of work and unnecessary performance hit feels like a hack.

Perhaps we could write a second variant of the match_keypoints function that returns two matrices instead? Or alternatively, there is some elegant Julian way of achieving the transformations I mentioned with this Vector of CartesianIndex type?
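For reference, one way to bridge the gap today is to flatten the Vector{CartesianIndex{2}} into a 2 × N matrix (the helper name is made up):

```julia
using Statistics

# Flatten a Vector{CartesianIndex{2}} into a 2×N matrix whose columns
# are the points, enabling the usual MVG-style matrix manipulations.
to_matrix(pts::Vector{CartesianIndex{2}}) = reduce(hcat, [collect(Tuple(p)) for p in pts])

pts = [CartesianIndex(1, 2), CartesianIndex(3, 4)]
M = to_matrix(pts)    # 2×2 matrix: [1 3; 2 4]
mean(M[1, :])         # mean x coordinate, MATLAB-style → 2.0
```

This still copies the data, so a matrix-returning variant of match_keypoints (or a zero-copy reinterpret) would remain worthwhile.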

corner.jl: ERROR: MethodError: no method matching dotc(::RGBA{Float64}, ::RGBA{Float64})

Where is dotc defined? It is not LinearAlgebra's BLAS.dotc. I thought using ColorVectorSpace.dotc might be a solution, but to no avail.

function gradcovs(img::AbstractArray, border::AbstractString = "replicate"; weights::Function = meancovs, args...)
    (grad_x, grad_y) = imgradients(img, KernelFactors.sobel, border)

    cov_xx = dotc.(grad_x, grad_x)
    cov_xy = dotc.(grad_x, grad_y)
    cov_yy = dotc.(grad_y, grad_y)

    weights(cov_xx, cov_xy, cov_yy, args...)
end

Any suggestions?

Error while trying to run the HOG example

I am trying to run the example here, after following the mentioned instructions. However, I am running into the following error:

ERROR: BoundsError: attempt to access 9×16×8 Array{Float64,3} at index [10, 1, 1]
Stacktrace:
 [1] trilinear_interpolate!(::Array{Float64,3}, ::Float64, ::Float64, ::Int64, ::CartesianIndex{2}, ::Int64, ::Int64, ::Int64, ::Int64, ::Int64) at /Users/abhijith/.julia/v0.6/ImageFeatures/src/hog.jl:160

In the following iterator in hog.jl,

for i in R
    trilinear_interpolate!(hist, mag[i], phase[i], orientations, i, cell_size, Int(cell_rows), Int(cell_cols), rows, cols)
end

the error pops up at i=CartesianIndex{2}((6, 1)). I tried with Gray.(img) too, with the same problem.

docstring missing

The current julia documentation fails to build: JuliaImages/juliaimages.github.io#44 (comment)

When building the Images.jl documentation (https://github.com/JuliaImages/juliaimages.github.io) with Documenter v0.22, there will be lots of unresolved links if the @docs macro is used in function_reference.md:

```@docs
glcm
...
```

and glcm doesn't have a docstring:

help?> glcm
search: glcm glcm_prop glcm_norm glcm_var_ref glcm_entropy glcm_mean_ref glcm_symmetric glcm_var_neighbour

  No documentation found.

To suppress the no-doc-found warnings, I temporarily changed @docs to julia in JuliaImages/juliaimages.github.io#50.

The following is a list of undocumented methods:

  • glcm
  • glcm_symmetric
  • glcm_norm
  • glcm_prop
  • max_prob
  • contrast
  • ASM
  • IDM
  • glcm_entropy
  • energy
  • dissimilarity
  • correlation
  • glcm_mean_ref
  • glcm_mean_neighbour
  • glcm_var_ref
  • glcm_var_neighbour
  • lbp
  • modified_lbp
  • direction_coded_lbp
  • lbp_original
  • lbp_uniform
  • lbp_rotation_invariant
  • multi_block_lbp

cc: @timholy

Images is downgraded

Currently Images gets downgraded from v0.24.1 to v0.23.3 if I use ImageFeatures v0.4.4.

Are there issues with the new Images release?

Is it on the roadmap to support 1.0 in the near future?

Right now it doesn't work with 1.0, yet I don't see much activity here. Is there any plan to support 1.0?

julia> using ImageFeatures
[ Info: Precompiling ImageFeatures [92ff4b2b-8094-53d3-b29d-97f740f06cef]
ERROR: LoadError: LoadError: syntax: invalid escape sequence
Stacktrace:
 [1] include at ./boot.jl:317 [inlined]
 [2] include_relative(::Module, ::String) at ./loading.jl:1038
 [3] include at ./sysimg.jl:29 [inlined]
 [4] include(::String) at /home/gzhang8/.julia/packages/ImageFeatures/0YxwE/src/ImageFeatures.jl:3
 [5] top-level scope at none:0
 [6] include at ./boot.jl:317 [inlined]
 [7] include_relative(::Module, ::String) at ./loading.jl:1038
 [8] include(::Module, ::String) at ./sysimg.jl:29
 [9] top-level scope at none:2
 [10] eval at ./boot.jl:319 [inlined]
 [11] eval(::Expr) at ./client.jl:389
 [12] top-level scope at ./none:3

FREAK & BRISK Descriptors

FREAK Roadmap

  • Freak Best Pairs
  • Orientation Pairs
  • Mean Intensity
  • Orientation of Patch
  • Sample Pairs for Descriptor
  • Modified Hamming Distance calculator (calculates distance in parts)
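The "distance in parts" item above can be sketched as an early-exit Hamming comparison over descriptor chunks (a hypothetical sketch, not the planned implementation):

```julia
# Sketch of a cascaded Hamming distance: compare binary descriptors
# chunk by chunk and bail out early once a threshold is exceeded.
function hamming_cascade(d1::BitVector, d2::BitVector, thresh::Int; parts::Int = 4)
    n = length(d1)
    step = cld(n, parts)
    dist = 0
    for lo in 1:step:n
        hi = min(lo + step - 1, n)
        dist += count(d1[lo:hi] .⊻ d2[lo:hi])
        dist > thresh && return dist    # early exit: already too far apart
    end
    return dist
end
```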

BRISK Roadmap

  • Need AGAST corners
  • Orientation (Long & Short Pairs)
  • Sample Pairs
  • Keypoint Maxima

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Improving Hog allocations / Making a fast HOG version

After looking carefully at the HOG code and comparing the time needed to perform pedestrian detection in an image (I found it too slow), I want to propose another version that is as fast as possible (even though it might differ a little from the version first proposed in academia). It would be nice to actually benchmark both versions' F-scores on a benchmark task. In any case, some ideas from the proposed version might be reused to rethink parts of the current HOG implementation, which could yield faster compute times.

After some discussion with @zygmuntszpak on slack I will start outlining the different components needed to implement the standard HOG:

  1. Divide window into adjacent non-overlapping cells of size 8 x 8 pixels.
  2. For each cell, compute a histogram of the gradient orientations binned into B bins.
  3. Group the cells into overlapping blocks of 2 x 2 cells (so each block has 16 x 16 pixels).
  4. Concatenate the four cell histograms in each block into a single block feature, and normalize the block feature by its Euclidean norm.
  5. The resulting HOG feature is the concatenation of all blocks features within a specified window (eg 128 x 64).
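The five steps above can be sketched as follows (an illustrative sketch, not the package's actual implementation; assumes a plain numeric window and ImageFiltering's imgradients, with hypothetical cell size and bin count defaults):

```julia
using ImageFiltering, LinearAlgebra

# Illustrative sketch of the standard HOG pipeline; `img` is a numeric
# (e.g. Float64) window.
function hog_sketch(img::AbstractMatrix; cell = 8, B = 9)
    gx, gy = imgradients(img, KernelFactors.sobel)       # gradients
    mag, ang = hypot.(gx, gy), atan.(gy, gx)
    rows, cols = size(img) .÷ cell
    hist = zeros(B, rows, cols)
    for I in CartesianIndices(img)                       # step 2: cell histograms
        r, c = (Tuple(I) .- 1) .÷ cell .+ 1
        (r > rows || c > cols) && continue
        b = clamp(floor(Int, mod(ang[I], π) / (π / B)) + 1, 1, B)
        hist[b, r, c] += mag[I]
    end
    # steps 3–4: overlapping 2×2 blocks of cells, each L2-normalized
    blocks = [normalize(vec(hist[:, r:r+1, c:c+1]))
              for r in 1:rows-1, c in 1:cols-1]
    return reduce(vcat, vec(blocks))                     # step 5: concatenate
end
```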

Currently, the HOG code performs this process for a given input image and HOG() struct. This causes a basic problem when users want to apply the descriptor to a 'big' image for object detection: a mix of redundant histogram computations (when windows overlap) and a lot of allocations (for each window, several arrays are created: gradients in x and y, magnitudes, and orientations).

Fast HOG version 1

  1. Same
  2. Same
  3. Skip
  4. Skip
  5. Resulting HOG is view of the cell features with a specified window

Why skip steps 3 and 4?
Well, if we do not normalize the histograms, it seems a bit odd to need the blocks, since we would end up with the exact same histogram cells copied into different blocks (quite a lot of redundant information; with normalization it makes sense, since the normalization factor changes the "redundant" cells).

I will call the array made of the histograms a Hogmap, which might look like this:

C_11  C_12  C_13  C_14 ...
C_21  C_22  C_23  C_24 ...
C_31  C_32  C_33  C_34 ...
...

Where C_ij corresponds to a histogram with B bins.

Hey, but this is not a HOG!

Well, it is a descriptor made with histograms of oriented gradients; it just skips normalizing the block regions in order to compute faster. I would like to test whether there is a real performance penalty. When the original HOG was proposed, no one (as far as I am aware) augmented training sets online. We could do so to obtain samples with different illumination in different regions, letting the learning algorithm learn to be invariant to such effects without the descriptor performing local normalizations.
