Are these strange URLs being used to hit our site the work of hackers, or what?

I wouldn’t try to react against a bad request as you’ll never be able to accommodate every possibility.

Rather, you’d be better off asserting a valid request.

If the correct URL is

/news/show_selected_article.php?article=12345

Then the code in show_selected_article.php should confirm that the value of $_GET['article'] is indeed numeric and reject everything else.

Even something as simple as casting the article parameter to an integer will give you some protection, if indeed the article parameter is supposed to be an integer.

$articleId = (int) $_GET['article'];
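A fuller sketch of the same idea, asserting a valid request and rejecting everything else (showArticleGuard is a hypothetical helper name, not something from this thread):

```php
<?php
// Sketch of "assert a valid request": accept only a plain digit string
// and reject everything else. showArticleGuard() is a made-up helper
// name for illustration.
function showArticleGuard(array $get): ?int
{
    $raw = $get['article'] ?? '';

    // ctype_digit() is true only for non-empty strings of 0-9, so
    // "12345" passes while "12345'" or any SQL payload does not.
    return ctype_digit($raw) ? (int) $raw : null;
}

$articleId = showArticleGuard($_GET);
if ($articleId === null) {
    http_response_code(404);    // not a valid article request
    echo 'Article not found';
}
// otherwise $articleId is safe to use when looking the article up
```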

Hi,

That idea would not totally work, since they would issue an attack URL like this:

/news/show_selected_article.php?article=999999+%2f**%2fuNiOn%2f**%2faLl+%2f**%2fsElEcT+0x393133353134353632312e39,0x393133353134353632322e39,0x39313335313435

which is one of the URLs we often see hackers using.

So really, I know what you are saying with:
"I wouldn’t try to react against a bad request as you’ll never be able to accommodate every possibility."
but with that said, don't you think that checking for "SeLeCt+" is a very good idea? I mean, no legit URL, at least of ours, has that in it.

If you have that URL, then echo (int) $_GET['article']; will give you "999999" as the result (SQL stuff all gone :slight_smile: )
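That’s easy to check for yourself (the payload below is the decoded form of the attack URL quoted above, truncated):

```php
<?php
// The int cast reads leading digits and stops at the first non-digit,
// so the SQL tail of the attack URL is simply discarded.
$payload = '999999+/**/uNiOn/**/aLl+/**/sElEcT+0x39';

var_dump((int) $payload);   // int(999999)
```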

Of course, that id is out of bounds since they’re looking for an invalid article id to test your error handling too (using a valid id would just deliver the article, I presume)

Checking for SQL keywords would serve no purpose when all you really want is a valid numeric id (I’m still assuming that it’s a numeric id that you’re after here)

Why not just wipe out all the bad stuff and only deal with what you’re expecting/ your application needs, instead of trying to test for the bad stuff?

Indeed. IMHO it is always better to go with a whitelist than it is to check a blacklist.

That is, accept only what you know to be acceptable rather than try to deal with all the possible variations of what might be unacceptable.
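As a sketch of why: the blacklist below is the literal "SeLeCt+" check proposed earlier in the thread, and the attack URL already posted here walks straight past it, while a digits-only whitelist rejects it without knowing anything about SQL. (The function names are mine, for illustration.)

```php
<?php
// Blacklist: the literal "SeLeCt+" check suggested above (case-sensitive).
function blacklistBlocks(string $query): bool
{
    return strpos($query, 'SeLeCt+') !== false;
}

// Whitelist: accept only what is known to be acceptable - a digit string.
function whitelistAccepts(string $query): bool
{
    return ctype_digit($query);
}

$payload = '999999+/**/uNiOn/**/aLl+/**/sElEcT+0x39';

var_dump(blacklistBlocks($payload));    // false - the casing differs, so it slips through
var_dump(whitelistAccepts($payload));   // false - rejected with no keyword list at all
var_dump(whitelistAccepts('12345'));    // true
```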

I would be tempted to use htaccess to redirect to another actual page and give a “301 Moved Permanently”.
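Something like this, perhaps (an untested sketch; the rewrite pattern and the redirect target are assumptions, not rules lifted from anyone’s config):

```apache
# Sketch: if the article parameter contains anything other than digits,
# send a 301 "Moved Permanently" to the news index instead of the page.
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)article=[^&]*[^0-9&]
RewriteRule ^news/show_selected_article\.php$ /news/ [R=301,L]
```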


@Mittineague said it better than I did

^^^ That especially.

FYI, we finally got a reply from Google about this attack on us from Google Cloud. Here is their answer:
"1) We will use this timestamp to conduct an investigation on this incident.

2) Regarding the previous attack, after investigation and analysis, we took down that user’s projects and disabled their cloud privileges.

3) We need to perform our analysis on this and do an investigation to determine what is the situation regarding this new activity. We do have monitoring and rules in place to detect and catch abusive behavior. However, it is always greatly appreciated whenever we receive escalation requests so we can investigate and see where (if any) gaps we have.

Rest assured that we are working on addressing attacks and abusive behavior on the Google Cloud Platform and we take all escalations seriously and perform a thorough investigation into the matter.

Sincerely,

The Google Cloud Platform Abuse Team"

What I want to know is:
How in God’s name does Google allow its servers to be used for such attacks?

But anyway, at least they are taking steps to address the attacks.

I tend to build so a request never goes to a query, at least not directly. Building an array of valid article ids is rather simple; then compare the request to this array. Depending on the size of your application, you could also build an array of all articles within the current category, with the id as the primary array key, then check for that id to show the article, so a request goes nowhere near a query.
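A sketch of that approach (the $articles array here is a stand-in for data your application would already have loaded for the current category):

```php
<?php
// Build a lookup of valid articles keyed by id, then check the request
// against the array, so user input never goes near a query.
$articles = [
    101 => ['title' => 'First article'],
    102 => ['title' => 'Second article'],
];

$requested = (int) ($_GET['article'] ?? 0);

if (array_key_exists($requested, $articles)) {
    $article = $articles[$requested];   // valid id: show the article
} else {
    http_response_code(404);            // unknown id: nothing to show
}
```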

RavenVelvet is on point. I’m not sure what the story is with whoever built this site, but this is all really web development 101. A developer should be competent enough to know about this stuff; otherwise they shouldn’t be building non-static websites at all.

I think you are missing the point.
The main point is: how in God’s name does Google allow its servers to be used for such attacks?

As far as code on our side to detect and block such attacks goes: even though giant companies like Forbes, Target, etc. have been hacked (so much for your comment “but this is all really web development 101”), we have never been brought down as they have.

But again that is not one of the main points of my questions.

It isn’t a search provider’s responsibility to filter requests for attacks. It is the developer’s responsibility to know the various attacks within the context they are developing in and to handle them appropriately. Something that looks like an attack in one context might be a false positive in another, and search providers can’t deduce that from client-side code alone without false positives that people would complain about. Not to mention that filtering wouldn’t prevent the attack from happening, considering the URL can be hit from the command line on a local machine, over which Google has no control (unless the website is hosted on Google hardware, at least). I’m sure there are some providers out there that do exercise a certain level of security beyond allowing all HTTP requests, to protect their servers, but that is not search engines.

Man, don’t be such a Google robot; think for yourself a little bit.
Really amazing how many people think Google can’t do wrong.
So:
1st, as you can see from the message I posted from Google itself, they have admitted that the attack on us came from their servers, and they say they disconnected that user.
2nd, these acts were not a simple URL in search results, but a program running on Google’s servers, firing thousands of URLs at our website per hour.

Jeez.

It isn’t that I believe Google can’t do wrong; it’s more that I believe it is the engineer’s responsibility to put procedures in place to prevent damage from possible attacks.

Yes, there is no debate about the fact that it is the
“engineer’s responsibility to put procedures in place to prevent damage from possible attacks.”
I agree with that.
And that is why our site has never been hacked, while some Fortune 500 sites like Target, Forbes, etc. have.

But with that said, it is also the responsibility of these big Goliath service providers like Google to make sure that their servers are not being used to attack other servers. Especially in the case of a mega-Goliath like Google, which we have to assume is a trusted site because of the near-total hegemony Google has over what passes for the Internet.
That is all.

For the record, there is nothing to say it was Google.

I could very easily change my user agent:

curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)');

To a lot of sites I would now look like a Googlebot, and I also wouldn’t have to honour your robots.txt file if I didn’t want to.

DID YOU NOT READ the prior messages?
Google sent a message apologizing for the attack on our site from their servers and stated that they have closed that user’s account; I included that message from Google in my prior posts here.

There’s your mistake. What they should do, or what might be expected of them, is neither guaranteed nor their responsibility.

They said they took down the personal account. Google just provides cloud services: https://cloud.google.com

WOW! That is the most twisted reading of law, certainly of the law of decency, imaginable!

Because according to your logic, should someone buy a billion-dollar building, rent out the units, and make billions from it, as Google is making from its servers, and then some of the tenants start throwing urine and feces on the people across the street, then it is not the responsibility of the building owner to control the tenants, but the responsibility of the people in the street to avoid getting urine and feces thrown on them, because according to you: “what might be expected of them to be doing is neither guaranteed nor their responsibility”.

Or if the people in the building start attacking people in the street, or in the building across from them, with guns and shooting at them, then it is the responsibility of the people in the street to avoid getting attacked and shot at, and not the building owner’s, because according to you: “what might be expected of them to be doing is neither guaranteed nor their responsibility”.

I guess you had best consult a lawyer.


What’s the best way to sanitize a URL var like this?
I’m using preg_replace:

if (isset($_GET["id"])){$id = preg_replace('#[^0-9]#i', '', $_GET["id"]) ;}

I keep finding in Analytics URLs with a ' on the end, after the id number, like this:

example.com/mypage.php?id=123'

So it’s not clear what they are trying to do, unlike those obvious SQL commands.
I’m not overly concerned, because whatever they are trying doesn’t seem to be working, but I am curious about it.
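For what it’s worth, a lone trailing quote is a classic injection probe: if the raw value ever reached a query, the unbalanced quote would cause a SQL error and tell the attacker the parameter is exploitable. The preg_replace above already strips it (and the /i modifier adds nothing on a digits-only character class):

```php
<?php
// What the sanitising line produces for the URL seen in Analytics:
$id = preg_replace('#[^0-9]#', '', "123'");

var_dump($id);  // string(3) "123" - the trailing quote is gone
```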