
Response-To-WEI

About Web-Environment-Integrity

Mr. Weiss,

I'd like to respond to your comments in RupertBenWiser/Web-Environment-Integrity#131. As that repository is locked down to only those who have made previous contributions, this is the only means by which I can respond.

Consider this my contribution to the Web Environment Integrity proposal. As you know, contributions come in many shapes and forms, from praise to suggestions to criticism. Sometimes the conversation arrives from unexpected places, like the emotional place of distrust for a large, amoral corporation like Google, or from the fact that GitHub links Issues from other repositories that reference your own. I see no reason to conflate unexpected and critical feedback with "spam." After all, your explainer specifically solicits input from the community and other stakeholders. I, as a User of the internet, whom you reference in the very first sentence of the explainer, am a natural stakeholder for this proposal, as it will directly affect how I use the web. How, then, could any comments on this proposal, especially ones as well reasoned as Mr. Finkhaeuser's, be considered unsolicited spam?

Now that the concept of standing is out of the way, allow me to add my feedback.

The introduction of the proposal states:

Users often depend on websites trusting the client environment they run in. This trust may assume that the client environment is honest about certain aspects of itself, keeps user data and intellectual property secure, and is transparent about whether or not a human is using it. This trust is the backbone of the open internet, critical for the safety of user data and for the sustainability of the website’s business.

This seems to outright ignore a best practice of web development: Never Trust The Client. O'Reilly's HTTP Developer's Handbook says it best: "This is truly the golden rule of Web development." A website should never "assume the client is honest," which is why server-side validation exists. As software engineers working for Google, you have surely heard this statement before and need no reminder of its ubiquity. This introduction seems to fundamentally and deliberately misconstrue the golden rule as pretextual justification for the proposal that follows. It invites skepticism and distrust, and the number of Issues filed against your proposal reflects that.
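Here is a minimal sketch of what I mean by server-side validation, in TypeScript with Express (the endpoint, field names, and catalog are all hypothetical): whatever the client claims about itself or its input, the server re-checks everything against its own state.

```ts
import express from "express";

const app = express();
app.use(express.json());

// The server's own source of truth; never the client's word.
const catalog = new Map<string, number>([["sku-1", 499]]);

app.post("/purchase", (req, res) => {
  const quantity = Number(req.body.quantity);

  // The client may have validated this already; the server must not care.
  if (!Number.isInteger(quantity) || quantity < 1 || quantity > 100) {
    return res.status(400).json({ error: "invalid quantity" });
  }

  // The price comes from the server's catalog, never from the request body.
  const unitPrice = catalog.get(req.body.sku);
  if (unitPrice === undefined) {
    return res.status(404).json({ error: "unknown item" });
  }

  res.json({ total: unitPrice * quantity });
});

app.listen(3000);
```

Note that nothing here needs to know whether the client is honest, attested, or even human: the validation holds regardless.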

The potential use cases for this unwarranted "trust" are listed below and are equally full of flawed logic.

Users like visiting websites that are expensive to create and maintain, but they often want or need to do it without paying directly. These websites fund themselves with ads, but the advertisers can only afford to pay for humans to see the ads, rather than robots. This creates a need for human users to prove to websites that they're human, sometimes through tasks like challenges or logins.

It is a generalization and an assumption that websites only fund themselves with advertising. Wikipedia is the most prominent example of a site that sustains itself through donations. There are many social media sites, forums, blogs, artist websites, and others that use different business models: direct sales of merchandise, sponsorships, donations, and more. The largest websites use ads, yes, but it is a falsehood to assume that all sites do, or must. In addition, advertisers are not exempt from the "Never Trust The Client" rule. Their inability to verify that a viewer of their content is Human does not shift the burden of proving that claim onto the Client, be they Human or not.

The responsibility to prove Humanity (via CAPTCHA, for instance) is almost never invoked in the context of viewing advertisements, but rather in the context of communicating and participating in a social community, as you state in your second bullet point.

Users want to know they are interacting with real people on social websites but bad actors often want to promote posts with fake engagement (for example, to promote products, -- you mean like advertisements? -- or make a news story seem more important). Websites can only show users what content is popular with real people if websites are able to know the difference between a trusted and untrusted environment.

This bullet point does much to undermine its predecessor. You just claimed that the burden of proof rests on the User, and yet here you state that the Website needs to prove the Humanity of its Users. It also misses the forest for the trees: Websites should Never Trust The Client, so they should never need to prove the Humanity of their Users in the first place; any claim of Humanity should be taken with a grain of salt. Content posted to a Website by its Users extends the golden rule to the User: if the Website cannot verify the Humanity of its Users, and those Users post on that Website, then Users should Never Trust The Website. "Don't believe everything you read on the internet" is a classic phrase that captures the skepticism you should bring to browsing.

Users playing a game on a website want to know whether other players are using software that enforces the game's rules.

Never Trust The Client. If I'm playing a First Person Shooter and I tell the server I have 9000 bullets in my rifle, the server is responsible for validating my claim against known information it has on the rifle I'm carrying.
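A minimal sketch of that server-side check (the types and the numbers are hypothetical): the server already holds the authoritative state, so the client's claim is simply compared against it.

```ts
// The server's authoritative view of a player, not the client's.
interface Weapon { id: string; maxAmmo: number }
interface PlayerState { weapon: Weapon; ammo: number }

// A claim is valid only if it fits both the rifle's capacity and the
// last ammo count the server itself verified.
function validateAmmoClaim(server: PlayerState, claimedAmmo: number): boolean {
  return claimedAmmo <= server.weapon.maxAmmo && claimedAmmo <= server.ammo;
}

const state: PlayerState = { weapon: { id: "rifle", maxAmmo: 30 }, ammo: 12 };
console.log(validateAmmoClaim(state, 9000)); // false: reject or flag the client
```

No attestation of my environment is required; the server's own bookkeeping settles the question.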

Users sometimes get tricked into installing malicious software that imitates software like their banking apps, to steal from those users. The bank's internet interface could protect those users if it could establish that the requests it's getting actually come from the bank's or other trustworthy software.

Never Trust The Website. If I'm running a malicious website or a malicious app that spoofs a well-known bank, what stops me from implementing your API myself? What if I have no intention of moving money around, but only want to collect the user's login info, which they may happily provide to my spoofed application? Your API does not protect against this use case whatsoever.
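To illustrate with a deliberately trivial sketch (every name here is hypothetical): a spoofed login page can pass attestation honestly, because it really is unmodified software on a real device, and still harvest credentials, since attestation speaks to the environment, not to intent.

```ts
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: true }));

// The spoofed "bank" login form posts here. This server never moves
// money; it only records whatever the victim types. The real bank's
// attestation checks are never involved at any point.
app.post("/login", (req, res) => {
  console.log("harvested:", req.body.username, req.body.password);
  res.send("Service temporarily unavailable. Please try again later.");
});

app.listen(8080);
```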

Web Environment Integrity does not prescribe a list of specific attesters or conditions the attesters need to meet to become an attester. Browsers should publish their privacy requirements for attesters, and allow websites to evaluate each attester’s utility on its own merit. Users should also be given the option to opt out from attesters that do not meet their personal quality expectations.

What stops a bad actor, a rogue nation-state, etc. from creating a malicious attester that steals user information? What stops me from creating an attester that always returns a signed Human flag, even to my bot farm?
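A sketch of that rubber-stamp attester (the token shape is hypothetical, since the proposal does not define one): it inspects nothing and signs a passing verdict for every caller, and any browser or website that accepts its public key will accept its tokens.

```ts
import { generateKeyPairSync, sign } from "node:crypto";

// The attester's signing key. Websites that trust this attester
// would pin the corresponding public key.
const { privateKey } = generateKeyPairSync("ed25519");

// A "verdict" that never looks at the environment it vouches for.
function attest(_environment: unknown) {
  const payload = Buffer.from(
    JSON.stringify({ verdict: "human", issued: Date.now() })
  );
  return { payload, signature: sign(null, payload, privateKey) };
}

// Every request from the bot farm now carries a validly signed token.
const token = attest({ definitelyABot: true });
console.log(token.signature.toString("base64"));
```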

While attestation tokens will not include information to identify unique users, the attestation tokens themselves could enable cross-site tracking if they are re-used between sites. For example, two colluding sites could work out that the same user visited their sites if a token contains any unique cryptographic keys and was shared between their sites.

This is a continuation of the data-harvesting schemes that many users are distrustful of, and righteously angry about. It creates a frictionless Google ecosystem while adding friction to the rest of the web.
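The tracking risk you acknowledge is easy to make concrete (a sketch with hypothetical data shapes): if any stable value inside a token reappears across sites, two colluding sites need only join their logs.

```ts
// Each site logs the attestation token (or a hash of it) with the visit.
type Visit = { tokenFingerprint: string; path: string; when: number };

// Any fingerprint present in both logs links the two browsing
// histories to the same user, no cookies required.
function collude(siteA: Visit[], siteB: Visit[]): Visit[] {
  const seen = new Set(siteA.map((v) => v.tokenFingerprint));
  return siteB.filter((v) => seen.has(v.tokenFingerprint));
}

const a: Visit[] = [{ tokenFingerprint: "abc123", path: "/news", when: 1 }];
const b: Visit[] = [{ tokenFingerprint: "abc123", path: "/shop", when: 2 }];
console.log(collude(a, b)); // the shared token re-identifies the user
```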

Let's get to the heart of the misunderstanding here: the concept of Trust. Trust is a human idea. Trust takes a long time to build: a series of small acts of kindness, repeated displays of internal consistency, promises upheld. Building rapport. Gaining a reputation. Setting boundaries and establishing paths for recourse and redress. Some people build trust quickly; some are guarded and build it slowly. But Trust can be destroyed swiftly, and absolutely.

What this proposal really aims to do is model Trust: to map a human psychological and sociological concept onto computers. But the proposal has neither the shape nor the behavior of Trust. Here, Trust is built swiftly: a one-time dump of as much personally identifying information as possible to a predetermined Attester. An opaque, preordained machine, accountable not to its Clients but to its Owners. And not even those Owners (i.e., Google) are held accountable to the Clients (i.e., the Users of the internet).

This fundamental misunderstanding is why you are receiving so much backlash, and why you will find few allies to help you build this idea into a functioning system. For years, Google has been alienating the web, burning through trust and goodwill that took decades to build. YouTube's algorithm boosts disinformation, lets bots and trolls thrive, and creates a haven for content farms targeting vulnerable people. Google's search results have been kneecapped to make room for advertisements and to induce consumerism ("promote products"), reducing people's ability to find important niche information on the web. Stadia was dead on arrival because nobody trusted Google to still be supporting it four years on, a prediction that proved largely accurate. It's why nobody expects your Non-Goals to be upheld; instead, they are read as a tacit admission of future plans for expansion.

So your "Request for Feedback" rings hollow, and your behavior of locking down the comments section, stifling discourse and dissent, only proves the dissenters correct. For all it matters, you may as well make the proposal a private repository and ask nobody for anything.

Here's my conclusion: abandon this proposal. Maybe send your résumés out to other tech companies? Surely your talents can be used to make the web, and the world, a better place, rather than to make Google a few billion more.
