nsfw-filter / nsfw-filter

A free, open source, and privacy-focused browser extension to block “not safe for work” content built using TypeScript and TensorFlow.js.

Home Page: https://nsfw-filter.com

License: GNU General Public License v3.0

JavaScript 19.05% TypeScript 80.48% HTML 0.47%
chrome-extension google-chrome-extension nsfw-recognition nsfw-classifier nsfw-filter nsfw-detection hacktoberfest tensorflow tensorflowjs typescript

nsfw-filter's People

Contributors

allcontributors[bot], anonacc, crittermike, dependabot[bot], freshmre, govza, pottekkat, sh13n, silloi, yzevm


nsfw-filter's Issues

Remove redundant permissions from manifest.json

Expected Behavior

Only require permissions that are necessary.

Current Behavior

Some permissions are redundant and are not being used.

Possible Solution

Go through all the permissions in the manifest and verify that each one is actually used and really necessary.

Context

Redundant permissions without any uses could get the extension removed from the Chrome Webstore.

version: v1.0.0

Something wrong with the notion.so

Bug description

When I open www.notion.so, many important elements on the page fail to load.

Expected result

Snipaste_2020-09-05_21-18-09

Actual result

Snipaste_2020-09-05_21-19-06

Your Environment

  • Version used: 1.2.0
  • Browser Name and version: Edge 85.0.564.44
  • Operating System and version: Windows 10 home Edition 1909
  • Link to your project: None

Add option for the user to choose how to filter the images

Expected Behavior

The user should be able to select how the image should be filtered from the following options:

  • Replace with an image from Unsplash
  • Blur image
  • Replace image with custom image
  • Add watermark to the image
  • Do nothing / turn off the filter

Current Behavior

Currently, if the images are found to be NSFW, they are automatically replaced by images from Unsplash.

Possible Solution

Add a popup.html file for the popup and add different options.

Based on this option, change how the image is processed in content.js.

How to know when the filter is done displaying the SFW images?

Is your feature request related to a problem? Please describe.

It's not really a feature request and not a bug either, but here's my issue/question.
I'm using this extension in tandem with VBA and Selenium WebDriver in Chrome (using the newly released extension from the store) to make sure I'm not taking NSFW images into account when running my macro. The issue is that it usually takes the filter about 0.25 to 0.5 seconds to un-hide the images that are safe for work (I guess by changing the visibility attribute). I'm trying to optimize the macro's duration, and I can't find an efficient way to detect when all SFW images have been made visible, not just some of them.
I currently have some hacky code that checks whether the number of visible items has changed after a second, and loops until it stops changing. Is there a variable or something in the code that I could check to see if it's done re-displaying the safe images?

If there isn't, that's fine, I just wanted to make sure because if there is that'd be great.

(To the people that helped me for my last issue, sorry for not replying last time, I don't check github very often.)

Thanks! :)

Blocked Images Counter

Is your feature request related to a problem? Please describe.

There is currently little feedback to the user as to how many/where modifications are made to the page.

Describe the solution you'd like

A counter to show the number of blocked images.
Example: uBlock Origin shows the number of items blocked on the page.
image

Describe alternatives you've considered

-

Additional context

Would work well in combination with the watermark feature proposed in #2

Filter loading lag

When opening pages with NSFW content, the images are visible initially and only then filtered out.

Expected Behavior

Content should be filtered immediately and never visible.

Current Behavior

There is a small lag between images becoming visible and being filtered. When the page first loads, all unsafe content is visible; only after a few seconds are all images filtered out.

Possible Solution

Steps to Reproduce (for bugs)

  1. Go to a page with NSFW content.

Create design for popup page

It would be great to have a design in Figma so we can quickly iterate on future changes.

What should be in the popup?

  1. Logo and the NSFW Filter title
  2. Bar to adjust the strictness of filtering #58
  3. Dropdown menu to select how to filter images: watermark (cover the NSFW image with the logo), hide, or blur #2
  4. Counter for total filtered images and videos #78
  5. Some text and links to our website NSFW-filter.com and GitHub

Some examples

(example screenshots)

Add caching for recently filtered images

When I visit a page, the filter checks its images; if I return to the same page, the filter checks them all again.
It would be great to start caching image URLs and prediction results for recently visited pages.

Let's store links and true/false results in a Map on the content script side. Max Map size: 100-200 URLs; cache type: LRU.
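The proposed cache could be sketched roughly like this. The class name and API are illustrative, not the extension's actual code; it relies on Map preserving insertion order to get LRU eviction:

```typescript
// LRU cache for prediction results, keyed by image URL.
class PredictionCache {
  private readonly map = new Map<string, boolean>()

  constructor (private readonly maxSize: number = 200) {}

  get (url: string): boolean | undefined {
    const value = this.map.get(url)
    if (value === undefined) return undefined
    // Re-insert to mark the entry as most recently used
    this.map.delete(url)
    this.map.set(url, value)
    return value
  }

  set (url: string, isNSFW: boolean): void {
    if (this.map.has(url)) this.map.delete(url)
    // Map preserves insertion order, so the first key is the least recently used
    if (this.map.size >= this.maxSize) {
      this.map.delete(this.map.keys().next().value as string)
    }
    this.map.set(url, isNSFW)
  }
}
```

The content script would consult the cache before messaging the background worker, and store the result afterwards.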

Some issues when installing to chrome

See below.

Expected Behavior

It should not have the below errors in the Chrome extensions manager.

Current Behavior

Chrome lists these issues after loading the unpacked dist directory:

Overriding the gradient for 'Max'
Overriding the gradient for 'OneHot'
Overriding the gradient for 'PadV2'
Overriding the gradient for 'SpaceToBatchND'
Overriding the gradient for 'SplitV'

Possible Solution

n/a

Steps to Reproduce (for bugs)

  1. Install using repo's instructions on Chrome for Mac v84.0.4147.89

Context

As a result, no NSFW images are blocked.

Your Environment

  • Version used: v0.2.2-beta
  • Browser Name and version: 64-bit Chrome for Mac v84.0.4147.89
  • Operating System and version: macOS Catalina v10.15.4
  • Link to your project: n/a

Handle unsafe content in <video> html tag

Expected Behavior

Block unsafe content in <video /> tag

Current Behavior

Nothing; unsafe content is simply shown.

Steps to Reproduce (for bugs)

  1. Go to any website with unsafe videos

Improve handling of dynamically rendering DOM elements

On websites where JavaScript dynamically renders DOM elements, we should somehow be notified about new elements and filter them.
How it works now: we just set a timeout (line in code).

Expected Behavior

The extension should notice when a new DOM element is added and filter it.

Current Behavior

The extension ignores new DOM elements.

Possible Solution

There is probably an existing solution for this.

Steps to Reproduce (for bugs)

  1. Go to the unsafe test website in the README and, after the page loads, change the setting to images only.
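One way to replace the fixed timeout is a MutationObserver on the document body. The wiring below is a browser-only sketch; `extractImageSources` is the pure part, and `analyzeAndFilter` is a hypothetical callback, not the extension's actual API:

```typescript
// Keep only <img> elements that actually have a src worth analyzing.
interface NodeLike { tagName?: string, src?: string }

function extractImageSources (addedNodes: NodeLike[]): string[] {
  return addedNodes
    .filter(node => node.tagName === 'IMG' && typeof node.src === 'string' && node.src.length > 0)
    .map(node => node.src as string)
}

// Browser-only wiring (runs only in a page context):
// const observer = new MutationObserver(mutations => {
//   for (const mutation of mutations) {
//     const urls = extractImageSources(Array.from(mutation.addedNodes) as NodeLike[])
//     urls.forEach(url => analyzeAndFilter(url)) // analyzeAndFilter: hypothetical
//   }
// })
// observer.observe(document.body, { childList: true, subtree: true })
```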

Add timeout to analyze image

Bug description

When we send an image to the background script, we wait until it's done. But what if it takes too long? 30 seconds? 1 minute?
Let's set a 10-second timeout for this case. Use Promise.race to get it done.

Expected result

Timeout after 10 seconds of analyzing.

Actual result

We wait for as long as the prediction takes.
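The Promise.race approach could look like this minimal sketch, assuming the prediction arrives as a Promise (the function name is illustrative):

```typescript
// Race a prediction against a timeout; the timer is cleared either way so a
// fast prediction doesn't leave a pending rejection behind.
async function predictWithTimeout<T> (
  prediction: Promise<T>,
  timeoutMs: number = 10_000
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined
  const timeout = new Promise<never>((_resolve, reject) => {
    timer = setTimeout(() => reject(new Error('Prediction timed out')), timeoutMs)
  })
  try {
    // Whichever settles first wins; a slow prediction loses to the rejection
    return await Promise.race([prediction, timeout])
  } finally {
    if (timer !== undefined) clearTimeout(timer)
  }
}
```

On timeout the caller could simply leave the image hidden (fail closed) rather than blocking the page.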

V1.0.0 Release

V1.0.0 is now officially complete. Let's look into how we can share this project with a more broad audience to get more awesome ideas and features. Here are some of the things I would like to check:

  • Check documentation
  • Manual testing
  • Social media posts
  • Releasing on Firefox and Chrome addons store

Change the way lazy-load images are handled

Properly handle lazy loading in code.

Expected Behavior

Content is marked as safe once.

Current Behavior

Content is marked as safe but re-checked on every scroll.

Possible Solution

Handle the img data-original attribute instead of classNames.

Steps to Reproduce (for bugs)

  1. Load any safe website with "load" or "lazy" in the URL.

I've taken this one.

Qst

Hey!
I want to ask: does this extension work in incognito windows and on porn websites?

v1.3.0 Roadmap

v1.3.0 Roadmap

nsfw-filter

nsfwjs

Progress https://github.com/orgs/nsfw-filter/projects/1

Finish date: November 25

UPD1: Not sure we should filter GIF images. The webpage sends the GIF URL to the background worker, which then loads the GIF image; depending on size it takes approximately 0.5-4 seconds (and ~50-70 ms to filter) per GIF. So I decided to remove GIF filtering in v1.3.0.

Add an alert box when NSFW images are found

Expected Behavior

It should show an alert box when NSFW images are detected.

Current Behavior

No alert; the extension automatically filters NSFW images.

Edit

What @abhirammltr means is that when the user enters a website, we check its images as soon as it loads and show the user an alert saying we have found NSFW content on it. The user could then go back or continue to the website.

failed to hide images

Bug description

failed to hide all nsfw images

Expected result

all nsfw images should be hidden

Actual result

some nsfw images were not hidden

Steps to reproduce

  1. go to bing.com
  2. search for "nsfw"
  3. click images
  4. turn off safe search
  5. scroll down to find images that were not properly hidden by the extension

Context

i like the idea of the extension and would like it to be improved

Your Environment

  • Version used: 1.0
  • Browser Name and version: chrome 85
  • Operating System and version: windows 10
  • Link to your project: n/a

Ability to change models somewhat easily?

Is your feature request related to a problem? Please describe.

Could someone explain how I (or anyone else) could change the model the extension uses?
I have a pretty decent computer, and I've found I get much better results with the "HUGE" Inception V3 model than with the default one when I use https://nsfwjs.com, especially when filtering smaller pictures. So I'd expect to also get better results if I could get the extension to use it as well.

Is there a somewhat simple way to do this? I'd like to try, anyway.
(This sort of information could also be put on the front page / README for the extension, by the way; I'm sure other people will have this question in the future.)
Thanks!

README suggestions

Here are some changes I would suggest to the README. @YegorZaremba please check all that you think should be made and connect with me on the Slack Channel.

  • Add "Featured in Product Hunt" image- I think this is necessary because people will easily notice this and being featured makes this project more reputable.

  • Change the "NSFW Filter" text image- I will make a design with our original fonts. Also aligning this to the left would look good.

  • Make the shields badges flat-square

  • Improve the opening paragraph- The first lines should be made better.

  • Remove bold where it is not necessary

  • Download image quality improvement- I will make the GIFs in a better resolution.

  • Show the license at the bottom of the README- I think this is a good practice.

  • Improve the activity diagram- The activity diagram is not clear.

I will make the changes as necessary and make the PR.

Increase performance

Expected Behavior

Processing images faster

Current Behavior

Slow; the delay is visible to the human eye.

Possible Solution

I see 2 bottlenecks:

  1. We have three prepare methods; it would be better to combine them into one loop.
  2. It is probably better to use the DOMContentLoaded event instead of load (line in code).

Steps to Reproduce (for bugs)

  1. Go to test website in README

Context

Your Environment

  • Version used: v0.2.3-beta
  • Browser Name and version: Version 84.0.4147.89 (Official Build) (64-bit)
  • Operating System and version: Ubuntu 20.04 LTS
  • Link to your project: n/a

Refactor lazy loading logic

Bug description

We use redundant logic. We collect all URLs from image.dataset (https://github.com/nsfw-filter/nsfw-filter/blob/master/src/content/Filter/ImageFilter.ts#L86) and send them to the background worker; if image.src is not NSFW, we then analyze all URLs in the dataset.

Expected result

When an image's src attribute changes (Google Images, Bing Images), we should predict the new src URL. We can use an observer to catch the changes: https://github.com/nsfw-filter/nsfw-filter/blob/master/src/content/DOMWatcher/DOMWatcher.ts#L89

Actual result

We set up a timer to wait for the src to be updated (https://github.com/nsfw-filter/nsfw-filter/blob/master/src/content/Filter/ImageFilter.ts#L49) and then analyze the new URL.

Which nsfwjs model does this extension use? What is the NSFW % Threshold? (If there is one)

Does anyone know which model (from https://github.com/infinitered/nsfwjs) this extension uses? On https://nsfwjs.com/ there are three models, each with different results (the larger "huge" one appears more accurate from what I can tell, even though it reports the same % accuracy).
And what's the threshold % for being NSFW? Is it different if the image is classified as "sexy", "porn", or another category?
Overall, maybe adding a bit of detail about how it works to the description could be nice.
Thanks!

Rethink user experience of nsfw filter extension

Current Behavior

The browser loads the page; if there are NSFW images on it, in most cases the user waits more than 0.5 seconds for analysis, fetching Unsplash images, and DOM redrawing. In other words, our users have already seen NSFW content they did not expect.

Possible Solution

How I see the logic:

  1. When the DOM is loaded, hide all images (e.g. via CSS: visibility: hidden), because we don't yet know whether they are safe
  2. Analyze all images
  3. If an image is safe, change its visibility to visible; if not, leave it hidden

Yes, in some cases the user may still see NSFW content for a while, but the time will definitely decrease. Why do I think this is better? We improve performance by not fetching a random Unsplash image, and the user doesn't wait as long (with NSFW content visible) for images to be filtered.
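The hide-first flow above can be sketched as a tiny pure helper (names illustrative, not the extension's API):

```typescript
// An image becomes visible only once it has been analyzed AND found safe;
// everything else stays hidden, which fails closed for unanalyzed images.
type Visibility = 'hidden' | 'visible'

function visibilityFor (analyzed: boolean, isNSFW: boolean): Visibility {
  return analyzed && !isNSFW ? 'visible' : 'hidden'
}
```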

@navendu-pottekkat what do you think about it?

p.s. I can implement it with #9

Change License to MIT

I have no idea what the difference between these licenses is, but most GitHub repos use the MIT License.
Do we have any reason to use the GPL-3.0 license?

Reverse filtering ie. only nsfw images

Is your feature request related to a problem? Please describe.

Not a problem, just a thought. Following up on #2, would it be possible to add a toggle so the extension only shows NSFW content? That could come in handy to hide all the crap on some sites.

Describe the solution you'd like

An opt-in option that reverses the filtering, i.e. if (nsfw) becomes if (!nsfw). The rest remains untouched.

Describe alternatives you've considered

Status quo.
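The toggle boils down to a single predicate. This is a sketch; `reverseMode` stands in for the hypothetical opt-in setting:

```typescript
// Normal mode hides NSFW images; reverse mode hides everything else.
function shouldHide (isNSFW: boolean, reverseMode: boolean): boolean {
  return reverseMode ? !isNSFW : isNSFW
}
```

The filtering pipeline stays unchanged; only the final hide/show decision consults the setting.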

Some things are getting broken on Reddit and Twitter

All Reddit posts' content is hidden, except for images, videos, and award icons:

image

Twitter videos' controls are hidden

What's expected:
image
What I get:
image

  • Version used: 1.2.0
  • Browser Name and version: Firefox Developer Edition 81.0b6

Check image before the page is loaded

Expected Behavior

The model should be activated as soon as the web page is loaded. Possibly before anything is displayed on the screen.

Current Behavior

Currently, the model is activated when the user scrolls through a website. And there is a noticeable lag.

DeepFakes clean images

Is your feature request related to a problem? No.

Describe the solution you'd like

Use DeepFake-style image manipulation, but to clothe people instead.

Describe alternatives you've considered

N/A

Additional context

N/A

Change icon

Update icons in the extension package and in product pages.

Extension is slowing down page load on https://pr0gramm.com/

Expected Behavior

  • The extension should not cause any lag; in our testing it does not. We need to verify whether this is a legitimate issue.

  • Smaller images should also be checked.

Current Behavior

One user tested the extension out on a German imageboard https://pr0gramm.com/ and found that using the extension is causing the page to load significantly slower.

Also, the smaller images on the page were unfiltered.

I did test it on Reddit and on a German imageboard that lists its content in 128x128 thumbnails within a grid. The NSFW section is for members only, though =/

That's the site where I had the most problems with images not being blocked, so I guess the AI isn't trained well enough for such small images. It was also the site with the longest loading times, but it is probably the worst case for this extension. I think another reason for the long loading times is the XHRs that load the images on that site, which also leads to images being displayed before they are eventually blocked.

Steps to Reproduce (for bugs)

Check the website mentioned in the issue.

Solutions

  • The extension should be checked if it is causing any slowing down.
  • #32 Should be fixed.

The extension mistakenly broke Notion and Reddit

Bug description

The extension makes Notion inaccessible and prevents Reddit from loading properly.

Actual result

Notion web app went blank, and some parts of the Reddit website are broken

Your Environment

  • Version used: 1.2.0
  • Browser Name and version: Firefox 81.0b8 (64-bit)
  • Operating System and version: Windows 10 (64-bit) OS Build 19041.508

Redundant analyze of duplicates

Bug description

Let's say a website has two or more images with the same URL, e.g. an avatar: the main image, a small image in the comment section, etc. We analyze the same image link twice or more. Let's get rid of this redundant work to reduce the time needed to filter the whole page.

Expected result

Each unique image URL is analyzed only once

Actual result

The same image URL is analyzed twice or more
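Deduplication could share one in-flight Promise per URL, so concurrent requests for the same image reuse a single analysis. A sketch, where `predict` stands in for the real background-worker call:

```typescript
// Share pending predictions by URL; the entry is removed once it settles.
const inFlight = new Map<string, Promise<boolean>>()

async function predictOnce (
  url: string,
  predict: (url: string) => Promise<boolean>
): Promise<boolean> {
  const pending = inFlight.get(url)
  if (pending !== undefined) return await pending // reuse the existing analysis
  const prediction = predict(url).finally(() => inFlight.delete(url))
  inFlight.set(url, prediction)
  return await prediction
}
```

Combined with an LRU cache of settled results, this would cover both concurrent duplicates and repeat visits.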

Ability to choose filter strictness

Is your feature request related to a problem? Please describe.

No.

Describe the solution you'd like

The extension could have a setting which is a scale, for example 1-100, that allows you to set how strict the filter should be. I believe the process that classifies an image as NSFW also has a confidence rating on it so you could basically allow users to choose how confident they'd like it to be.

Describe alternatives you've considered

  • Allow users to mark an image on the page as NSFW, thus adding new training data to the model locally

Additional context

N/A
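One way to wire a 1-100 strictness scale into the decision is to map it onto the confidence threshold applied to nsfwjs-style predictions. This mapping is an assumption for illustration, not the extension's current behavior; the category names follow nsfwjs:

```typescript
// Higher strictness lowers the confidence needed to block an image:
// strictness 100 blocks at any confidence > 0, strictness 1 at ~0.99.
interface Prediction { className: string, probability: number }

const NSFW_CLASSES = new Set(['Porn', 'Sexy', 'Hentai'])

function isNSFW (predictions: Prediction[], strictness: number): boolean {
  const threshold = 1 - strictness / 100
  return predictions.some(p => NSFW_CLASSES.has(p.className) && p.probability > threshold)
}
```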

NSFW

I'm always frustrated when I want to report NSFW content: if I take a screenshot, the NSFW area is blank.

A solution is to add an option where NSFW content is covered with "NSFW" in bold white.

Thanks.

Filter div with background image

Bug description

If a page has a div element with a background-image, we should filter it too:

<div style="background-image: url('https://example.com/image.jpg')"></div>

Expected result

Should be filtered

Actual result

Nothing is filtered
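Supporting this would mean extracting the URL from the element's background-image value so it can go through the same prediction pipeline as <img> tags. A sketch of the parsing step (function name illustrative):

```typescript
// Matches url('...'), url("...") and bare url(...) forms of background-image.
function parseBackgroundImageURL (backgroundImage: string): string | null {
  const match = /url\(\s*['"]?([^'")]+)['"]?\s*\)/.exec(backgroundImage)
  return match !== null ? match[1] : null
}
```

In the browser, the value would come from `getComputedStyle(div).backgroundImage`, and filtering could set the property to `none` instead of toggling visibility.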

Add a landing page

Add a landing page for our extension. A simple one-page website should be enough.

Currently, the default README file in the docs/ folder is used to create the website.

Should we add this?
