• finitebanjo@lemmy.world · 8 hours ago

    Are there benefits to websites thinking your agent is a phone? I assumed phones just came with additional restrictions, such as meta tags in the stylesheet; not like stylesheets matter at all to a scraper lol
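As an aside: a scraper can claim to be a phone simply by sending a mobile User-Agent header, and some servers will then return their lighter mobile markup. A minimal sketch using only Python's standard library (the UA string and URL below are placeholders, not a recommendation):

```python
from urllib.request import Request

# A typical iPhone Safari User-Agent string (placeholder; real values vary).
# Servers that sniff this header may serve their mobile layout instead.
MOBILE_UA = (
    "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) "
    "AppleWebKit/605.1.15 (KHTML, like Gecko) "
    "Version/17.0 Mobile/15E148 Safari/604.1"
)

def mobile_request(url: str) -> Request:
    """Build a request object that identifies itself as a phone."""
    return Request(url, headers={"User-Agent": MOBILE_UA})

req = mobile_request("https://example.com")  # placeholder URL
print(req.get_header("User-agent"))
```

Whether the mobile variant is actually easier to scrape depends entirely on the site.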

  • MonkderVierte@lemmy.zip · 1 day ago (edited)

    Just remember that the captcha flood is because AI companies do rogue scraping. Be nice, especially to little private sites.

  • handsoffmydata@lemmy.zip · 2 days ago

    Local data hoarder who looks down on calls outside the network as obscenities. (Entire collection scraped more aggressively than tech bros training an AI model)

    • chaospatterns@lemmy.world · 22 hours ago

      I scrape my own bank and financial aggregator to have a self hosted financial tool. I scrape my health insurance to pull in data to track for my HSA. I scrape Strava to build my own health reports.

    • tetris11@feddit.uk · 22 hours ago

      postmarketOS tables, because I was looking for a device that was unofficially supported but somehow not in their damn table

    • KairuByte@lemmy.dbzer0.com · 2 days ago

      You can’t parse [X]HTML with regex. Because HTML can’t be parsed by regex. Regex is not a tool that can be used to correctly parse HTML. As I have answered in HTML-and-regex questions here so many times before, the use of regex will not allow you to consume HTML. Regular expressions are a tool that is insufficiently sophisticated to understand the constructs employed by HTML. HTML is not a regular language and hence cannot be parsed by regular expressions. Regex queries are not equipped to break down HTML into its meaningful parts. so many times but it is not getting to me. Even enhanced irregular regular expressions as used by Perl are not up to the task of parsing HTML. You will never make me crack. HTML is a language of sufficient complexity that it cannot be parsed by regular expressions. Even Jon Skeet cannot parse HTML using regular expressions. Every time you attempt to parse HTML with regular expressions, the unholy child weeps the blood of virgins, and Russian hackers pwn your webapp. Parsing HTML with regex summons tainted souls into the realm of the living. HTML and regex go together like love, marriage, and ritual infanticide. The <center> cannot hold it is too late. The force of regex and HTML together in the same conceptual space will destroy your mind like so much watery putty. If you parse HTML with regex you are giving in to Them and their blasphemous ways which doom us all to inhuman toil for the One whose Name cannot be expressed in the Basic Multilingual Plane, he comes. HTML-plus-regexp will liquify the n​erves of the sentient whilst you observe, your psyche withering in the onslaught of horror. 
Rege̿̔̉x-based HTML parsers are the cancer that is killing StackOverflow it is too late it is too late we cannot be saved the transgression of a chi͡ld ensures regex will consume all living tissue (except for HTML which it cannot, as previously prophesied) dear lord help us how can anyone survive this scourge using regex to parse HTML has doomed humanity to an eternity of dread torture and security holes using regex as a tool to process HTML establishes a breach between this world and the dread realm of c͒ͪo͛ͫrrupt entities (like SGML entities, but more corrupt) a mere glimpse of the world of reg​ex parsers for HTML will ins​tantly transport a programmer’s consciousness into a world of ceaseless screaming, he comes~~, the pestilent slithy regex-infection wil​l devour your HT​ML parser, application and existence for all time like Visual Basic only worse he comes he comes do not fi​ght he com̡e̶s, ̕h̵i​s un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment, HTML tags lea͠ki̧n͘g fr̶ǫm ̡yo​͟ur eye͢s̸ ̛l̕ik͏e liq​uid pain, the song of re̸gular exp​ression parsing will exti​nguish the voices of mor​tal man from the sp​here I can see it can you see ̲͚̖͔̙î̩́t̲͎̩̱͔́̋̀ it is beautiful t​he final snuffing of the lie​s of Man ALL IS LOŚ͖̩͇̗̪̏̈́T A*LL I​S LOST the pon̷y he come*s he c̶̮omes he co~~mes the ich​or permeates all MY FACE MY FACE ᵒh god no NO NOO̼*O​O NΘ stop the an​*̶͑̾̾​̅ͫ͏̙̤g͇̫͛͆̾ͫ̑͆l͖͉̗̩̳̟̍ͫͥͨ*e̠̅s ͎a̧͈͖r̽̾̈́͒͑en​ot rè̑ͧ̌aͨl̘̝̙̃ͤ͂̾̆ ZA̡͊͠͝LGΌ ISͮ̂҉̯͈͕̹̘̱ TO͇̹̺ͅƝ̴ȳ̳ TH̘Ë͖́̉ ͠P̯͍̭O̚​N̐Y̡ H̸̡̪̯ͨ͊̽̅̾̎Ȩ̬̩̾͛ͪ̈́̀́͘ ̶̧̨̱̹̭̯ͧ̾ͬC̷̙̲̝͖ͭ̏ͥͮ͟Oͮ͏̮̪̝͍M̲̖͊̒ͪͩͬ̚̚͜Ȇ̴̟̟͙̞ͩ͌͝S̨̥̫͎̭ͯ̿̔̀ͅ

    • UndercoverUlrikHD@programming.dev · 1 day ago

      I’ve got a bot on Lemmy that scrapes ESPN for sports/football updates, using regex to retrieve the JSON that is embedded in the HTML file. It works perfectly so far 🤷‍♂️
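For what it's worth, that pattern is fairly defensible: the regex only *locates* the embedded JSON blob, and a real JSON parser does the actual parsing. A toy sketch with an invented page; the `window.__DATA__` variable name is made up for illustration, not ESPN's actual markup:

```python
import json
import re

# Hypothetical page source: many sites embed their data as a JSON blob
# assigned to a global variable inside a <script> tag.
html = """
<html><head><script>
window.__DATA__ = {"match": {"home": "Arsenal", "away": "Spurs", "score": [2, 1]}};
</script></head><body>…</body></html>
"""

# The regex never parses the HTML itself; it just finds the assignment
# and hands the captured blob to json.loads.
m = re.search(r"window\.__DATA__\s*=\s*(\{.*?\})\s*;", html, re.DOTALL)
data = json.loads(m.group(1))
print(data["match"]["home"])  # → Arsenal
```

The fragile part is the anchor pattern, not the JSON handling, so this tends to break loudly rather than silently.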

    • MonkderVierte@lemmy.zip · 1 day ago

      Delete all line breaks, add one before each <, grep the data you want per line and you’re good.

      Of course, only for targeted data extraction. …which should be the default in any parser, really.
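The recipe above, as a toy sketch (invented input; this is targeted extraction, not a general HTML parser):

```python
import re

html = "<ul><li>alpha</li><li>beta</li></ul>"  # toy input

# Step 1: delete all line breaks.
flat = html.replace("\n", "")
# Step 2: add a line break before each <, so every tag starts a new line.
lines = flat.replace("<", "\n<").splitlines()

# Step 3: grab only the lines you care about — here, the <li> items.
items = [re.sub(r"^<li>", "", ln) for ln in lines if ln.startswith("<li>")]
print(items)  # → ['alpha', 'beta']
```

It falls over on attributes, nesting, and `<` inside text, which is exactly why it only works for targeted extraction on pages you've eyeballed.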

  • lime!@feddit.nu · 2 days ago

    we’re in web 3.0 now, apis and data access are a thing of the past. so scraping it is!

    • einkorn@feddit.org · 2 days ago (edited)

      Guess who recently asked a company for access to the API their frontend uses to load data from their backend, and got told “Nope, and btw scraping is against our TOS”?

      Well, if you won’t give me the info you serve anyway the easy way, I can still take it the hard way. 🤷‍♂️

      • CompassRed@discuss.tchncs.de · 2 days ago

        Maybe you should just try being lucky. I found a critical security vulnerability while working on my scraping project. I told them, they paid me and gave me written permission to scrape.

        • einkorn@feddit.org · 1 day ago

          You are braver than I am, because here in Germany people usually get sued for reporting security vulnerabilities.

          • EldenLord@lemmy.world · 22 hours ago

            I know a guy who did exactly that and got sued. The security failure he reported even constituted a criminal offense (a Straftatbestand) committed by the company, so he won the case. German companies really love shooting themselves in the foot.

            • bless@lemmy.ml · 9 hours ago

              Over here, not just sued, but sued for extortion, because they had the audacity to ask for a bug bounty. OK then: if I ever find a security hole that exposes sensitive data, filing a GDPR report it is.

              • Victor@lemmy.world · 1 day ago

                But the technology is already in place, and you get sued if you point out security flaws in it? Crazy.

                • einkorn@feddit.org · 1 day ago

                  Yes, because any circumvention of any form of security, be it as useless as a hardcoded default password, is considered a crime under German law. So even the mere discovery of a security flaw puts you with one foot in jail, because technically you did something you weren’t supposed to.

  • undefined@lemmy.hogru.ch · 1 day ago

    Ha, this reminds me of implementing “API” access in the shipping world for companies that only ship a 90s-style web portal.