
Comments (7)

tipiirai commented on May 26, 2024

Sounds like a legit idea. I was planning to implement clearer parsing/tokenization and rendering phases because there is a need for more customized highlighting per language.

I'm sorry this answer took so long. My mind has been occupied with the upcoming design system, but I'm planning to make a round of updates to Glow and Nuekit internals before launching it.

Thanks

from nue.

nobkd commented on May 26, 2024

Related to "more customized highlighting per language": #197 (comment)


fabiospampinato commented on May 26, 2024

I just released the "convenient" highlighter/tokenizer on top of Glow that I had in mind: https://twitter.com/fabiospampinato/status/1762965155841773879

Generally, FWIW, I really like this approach, and if more effort could be put into refining the syntax highlighter I think it could actually be pretty decent for a lot of use cases.

Some areas that IMO would be nice if they could be improved:

  1. Producing complete tokens, that cover every input character.
  2. Not producing unnecessary tokens, like the ones mentioned in the message above.
  3. Improving support for some languages nested inside other languages, like JS inside a <script> tag.
  4. Maybe special-casing more things, like rendering things that look like unary/binary/ternary operators with the accent color too.
  5. Refining keyword detection to not consider a word a keyword if it comes right after a "." (i.e. a property access).
  6. Detecting backtick-delimited strings as strings too.
  7. Possibly refining syntax highlighting for lots of other little edge cases.

IMO with relatively few tweaks it could get close to the quality that TextMate achieves, in a lot more cases.
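Item 1 above (complete tokens covering every input character) is mechanically checkable. A minimal sketch of such a check, assuming tokens are plain { start, end } index pairs (a hypothetical shape for illustration, not Glow's actual API):

```javascript
// Check whether a token list covers every character of the input row.
// Tokens are assumed to be { start, end } index pairs (hypothetical shape).
function findGaps(tokens, input) {
  const sorted = [...tokens].sort((a, b) => a.start - b.start);
  const gaps = [];
  let pos = 0;
  for (const { start, end } of sorted) {
    if (start > pos) gaps.push([pos, start]); // uncovered span before this token
    pos = Math.max(pos, end);
  }
  if (pos < input.length) gaps.push([pos, input.length]); // uncovered tail
  return gaps;
}
```

Running something like this over a highlighter's token output would pinpoint exactly which character ranges it leaves unstyled.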


Example comparison I got, with Glow on the left and TextMate on the right:

[Screenshot taken 2024-02-28]

Code I used for the example:

import shiki from 'shiki';

// Some example code

shiki
  .getHighlighter({
    theme: 'nord',
    langs: ['js'],
  })
  .then(highlighter => {
    const code = highlighter.codeToHtml(`console.log('shiki');`, { lang: 'js' })
    document.getElementById('output').innerHTML = code
  });


tipiirai commented on May 26, 2024

@fabiospampinato as of the most recent commit there is a public parseRow method that understands inline comments. It returns an array of tokens in the following format:

[
  { start: 0, end: 1, tag: "i", re: /[^\w \u2022]/g },
  { start: 11, end: 18, tag: "em", re: /'[^']*'|"[^"]*"/g, is_string: true },
  ...
]

Where start and end are the start and end indexes into the input string.

Hope this helps. Note that this method only parses individual rows, so it has no knowledge of multi-line comments.
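Since parseRow reports only matched spans, a caller can recover full coverage itself by filling the gaps between tokens with untagged text. A minimal sketch, assuming the { start, end, tag } shape shown above (the gap-filling logic itself is not part of Glow):

```javascript
// Expand parseRow-style tokens into a complete segmentation of the row,
// inserting untagged filler segments for characters no token covers.
// The { start, end, tag } shape follows the example above; the rest is assumption.
function segmentRow(row, tokens) {
  const sorted = [...tokens].sort((a, b) => a.start - b.start);
  const segments = [];
  let pos = 0;
  for (const { start, end, tag } of sorted) {
    if (start > pos) segments.push({ text: row.slice(pos, start), tag: null });
    segments.push({ text: row.slice(start, end), tag });
    pos = end;
  }
  if (pos < row.length) segments.push({ text: row.slice(pos), tag: null });
  return segments;
}
```

This sketch assumes disjoint spans; overlapping tokens (like the nested apostrophe ones discussed later in this thread) would need de-duplicating first.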


fabiospampinato commented on May 26, 2024

Nice thanks 👍 Are the tokens covering the entire input string? Like what should happen in that example between indexes 1 and 11?


fabiospampinato commented on May 26, 2024

@tipiirai the new function is not exported from the entrypoint, could you fix this?


fabiospampinato commented on May 26, 2024

@tipiirai the tokenization seems a bit wrong. With this code:

import { parseRow } from 'nue-glow/src/glow.js';

const code = "import shiki from 'shiki';";
const lang = "js";

const tokens = parseRow(code, lang);

I get the following tokens:

{start: 0, end: 6, tag: 'strong', re: /\b(null|true|false|undefined|import|from|async|aw…l|until|next|bool|ns|defn|puts|require|each)\b/gi}
{start: 13, end: 17, tag: 'strong', re: /\b(null|true|false|undefined|import|from|async|aw…l|until|next|bool|ns|defn|puts|require|each)\b/gi}
{start: 18, end: 25, tag: 'em', re: /'[^']*'|"[^"]*"/g, is_string: true}
{start: 18, end: 19, tag: 'i', re: /[^\w •]/g}
{start: 24, end: 25, tag: 'i', re: /[^\w •]/g}
{start: 25, end: 26, tag: 'i', re: /[^\w •]/g}

These are problematic because you can spot right away that there are three tokens covering a single character each, but our input string ends with shiki';, so there's no reasonable scenario in which three length-1 tokens appear at the end.

If I explicitly slice those ranges off from the input string I get this array:

['import', 'from', "'shiki'", "'", "'", ';']

So basically there are two extra tokens for the string's apostrophes that shouldn't exist 😢
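Until that's fixed upstream, one possible client-side workaround is to discard tokens that fall entirely inside a string token. A sketch assuming the { start, end, is_string } fields shown in the output above (the filtering helper is hypothetical, not part of Glow):

```javascript
// Drop tokens nested entirely inside a string token, which removes
// the spurious apostrophe tokens shown in the output above.
// Assumes parseRow-style tokens with { start, end, is_string } fields.
function dropNestedInStrings(tokens) {
  const strings = tokens.filter(t => t.is_string);
  return tokens.filter(t =>
    t.is_string ||
    !strings.some(s => s.start <= t.start && t.end <= s.end)
  );
}
```

Applied to the token dump above, this keeps the 'shiki' string token and the trailing semicolon token while dropping the two apostrophe tokens.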

