View Poll Results: IS this site still giving YOU problems?
Yes, it's still awful
46
46.94%
Yes, but I'm now resigned to it/it's acceptable to me.
19
19.39%
No, it's fine.
33
33.67%
Voters: 98. You may not vote on this poll
Two weeks later: IS the site still giving you problems? Vote now.
#31
Unmapped 12.4s @ 105
iTrader: (29)
Joined: Apr 2005
Posts: 11,777
Likes: 4
From: Newcastle. 330bhp-289lb/ft @ 1bar boost - 12.4s @ 105mph
Hit and miss, to be honest.
Sometimes it's all fine and dandy, fast loading pages etc.
Others, it's slower than a rotting corpse doing the marathon.
What I will say though, I do put some of it down to my sh1te ISP.
#33
Please post your screenshots in the other topic if you're having problems, fellas; the more screenshots we get from you, the easier we can pinpoint the issues. I know it's infuriating for many of you, but try to note that the problems have gone for most people, and again I must reiterate that these problems are caused by a 3rd party's malicious attacks, NOT IB.
If these problems are still being blamed on a 3rd-party DoS attack, can I enquire as to why your team has not simply blocked repeat requests received from the same address within the same 1 or 2 seconds? That's the standard approach, I believe, and it should easily have been implementable within the time frame over which this issue has manifested itself.
Such a fix can be implemented in code or at the web-server level using filters. I'd be happy to offer some advice if your team is having trouble pinpointing how to prevent this attack.
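The throttle described here can be sketched in a few lines. This is only an illustration of the idea, not how IB's stack works: the window length, function names and in-memory table are all made up for the example.

```python
import time
from collections import defaultdict

# Illustrative sketch: reject repeat requests from the same address
# that arrive within a short window (1 second here).
WINDOW_SECONDS = 1.0

_last_seen = defaultdict(float)  # ip -> timestamp of last allowed request

def allow_request(ip, now=None):
    """Return True if this request should be served, False if throttled."""
    now = time.monotonic() if now is None else now
    if now - _last_seen[ip] < WINDOW_SECONDS:
        return False  # repeat hit inside the window: drop it
    _last_seen[ip] = now
    return True
```

In practice you'd do this at the firewall or web-server layer rather than in application code, and keep the table in a shared cache rather than process memory, but the logic is the same.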
If over 40% of respondents in the latest 2 polls are saying performance is making the site unusable, your statement that the problems have "gone for most people" doesn't appear to hold water.
If you then consider that out of 87 people, only 32 say it's fine, that figure should be worrying you and your team.
We all need to work together to get this fixed, and I'll happily forward any Fiddler or dynaTrace reports I can put together if I see slow traffic.
Cheers.
Last edited by MrNoisy; 14 December 2009 at 01:29 PM.
#34
I'm still sending screenshots to Stu, so they must be doing SOMETHING.
Seems to load individual pages quicker now, with the occasional stutter, but responding to a post is Veeeeerrrrryyyyy slow, and hit or miss: 3 seconds or 30.
#35
Hi Stu,
If these problems are still being blamed on a 3rd-party DoS attack, can I enquire as to why your team has not simply blocked repeat requests received from the same address within the same 1 or 2 seconds? That's the standard approach, I believe, and it should easily have been implementable within the time frame over which this issue has manifested itself.
Such a fix can be implemented in code or at the web-server level using filters. I'd be happy to offer some advice if your team is having trouble pinpointing how to prevent this attack.
If over 40% of respondents in the latest 2 polls are saying performance is making the site unusable, your statement that the problems have "gone for most people" doesn't appear to hold water.
If you then consider that out of 87 people, only 32 say it's fine, that figure should be worrying you and your team.
We all need to work together to get this fixed, and I'll happily forward any Fiddler or dynaTrace reports I can put together if I see slow traffic.
Cheers.
Last edited by IB Adrian; 15 December 2009 at 03:22 AM.
#36
Let's hope it keeps getting better for you and this all gets sorted out.
#37
To the best of my knowledge the attacks have abated, and we have been able to mitigate their effect on the site's accessibility by doing a number of things (some of which you mentioned). We also increased our overall firewall capacity by 300%, for example.
Since doing these things we have been able to pinpoint further improvements that we can make.
One thing we are currently doing is building a new database server and virtual-machine structure (a hypervisor) for the site: now that we have rectified the issues caused by the DDoS attack, this has become our "weakest link in the chain".
Other things we are doing:
- Testing a change to the hosting of our images (you will currently see this as ibsrv, a gzipped, aggressively cached hosting system)
- Alternative hosting of 3rd-party pixels/images/scripts (jbslsr, postrelease etc.)
If you're looking at caching images, scripts etc., gzip is good, but also make use of ETags, expiry headers and similar mechanisms so that, where possible, the server sends back the standard "download not required" (304 Not Modified) response. ETags are great if you can get them pushed out with the response headers.
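That revalidation exchange can be sketched as below. It's a toy model, not any real server's API: the function names are invented, and MD5 is just one convenient way to derive an ETag from the resource bytes.

```python
import hashlib

# Illustrative sketch of the "download not required" exchange: the server
# derives an ETag from the resource bytes; when the client echoes it back
# in If-None-Match, the server answers 304 Not Modified with an empty body.
def make_etag(body):
    return '"%s"' % hashlib.md5(body).hexdigest()

def respond(body, if_none_match=None):
    """Return (status, headers, body) for a GET of this resource."""
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, {"ETag": etag}, b""  # client's cached copy is current
    return 200, {"ETag": etag, "Cache-Control": "max-age=86400"}, body
```

First request: 200 plus the ETag. Revalidation with that ETag in If-None-Match: 304 and no body on the wire.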
I would assume you're using a clustered Apache farm or similar rather than IIS to host a site of this size, which will probably make this sort of thing easier to implement!
Obviously, minification and obfuscation will help too, if you don't already do that, purely from an initial-download perspective.
On top of that, one idea is to take the repeated-hits-from-the-same-IP approach one step further: log consecutive hits from one address on a session/DB basis, and when a predetermined "peak" count is reached that signifies an attack rather than browsing, start blocking that IP, either by rejecting the HTTP request or by sending it to a static page (I'd guess the former would be preferable).
You could store the list in a cache or similar fast-retrieval mechanism, updating and querying it on the fly as requests come in.
That way, you'd have a relatively intelligent, learning platform that could adapt if the attacker switched IP.
Just an idea
Last edited by IB Adrian; 15 December 2009 at 03:23 AM.
#38
A lot of what you are referring to is above my head technically (I am not the tech department ), but I have passed your post onto those that are, I am pretty sure that our tech team is already using a lot of the technology you are referring to, but reminding them can't help
#39
Everything served off ui.ibsrv.net, avatars.ibsrv.net, and images.ibsrv.net basically follows the HTTP optimization rules Yahoo listed out years ago. You actually want to avoid sending an ETag header, if and only if you're doing the other best practices for forever-cached content:
- send a far-future Expires header
- send a Last-Modified header comfortably in the past
Also:
- All JS and CSS is minified *and* gzipped.
- Connections are kept alive.
- ui + avatars + images are 3 separate hosts, which allows more parallelism in downloading resources: your browser can establish up to 6 simultaneous connections. This has to be balanced against extra DNS lookups, but we've measured that 3 hosts is a decent compromise.
IBsrv is our in-house acceleration solution, built so we could load pages faster by following all those best practices. We deploy it as a vBulletin Product on the top sites we run.
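The two tricks described here, far-future caching headers and splitting assets across a few hostnames, can be sketched roughly like this. The hostnames mirror the ibsrv ones mentioned above, but the functions themselves are hypothetical, not IBsrv's actual code:

```python
import hashlib
from email.utils import formatdate

# Hostnames as mentioned above; the assignment scheme below is invented.
HOSTS = ["ui.ibsrv.net", "avatars.ibsrv.net", "images.ibsrv.net"]

def host_for(path):
    """Deterministically pick one of the static hosts for an asset path,
    so the same URL always comes from the same host (keeping caches warm)
    while the browser can still parallelise across all three hosts."""
    digest = hashlib.md5(path.encode()).digest()
    return HOSTS[digest[0] % len(HOSTS)]

def cache_headers(now, last_modified):
    """Far-future Expires plus a Last-Modified well in the past; no ETag,
    per the 'avoid ETags when content is forever-cached' rule above."""
    return {
        "Expires": formatdate(now + 10 * 365 * 86400, usegmt=True),
        "Last-Modified": formatdate(last_modified, usegmt=True),
        "Cache-Control": "public, max-age=315360000",
    }
```

The hash-based assignment is what makes sharding cache-friendly: a random choice per request would scatter the same asset across hosts and defeat the browser cache.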
Thanks for the update! Sounds like you are investigating a whole multitude of technologies.
#43
Other things we are doing:
Testing the changing of the hosting of our images (you will see this currently as ibsrv, a gzipped, aggressive cached hosting system)
Alternative hosting of 3rd party pixels/images/scripts (jbslsr, postrelease etc.)
Does THAT mean owt?
Mine is hit or miss again today: 3-10 clicks to get a page to load, and it ALWAYS times out when posting a reply, even though 9 times out of 10 it HAS posted.
Posting via Quick Reply is usually OK........
Other things: it's almost impossible to get into my notifications other than by repeatedly clicking; opening a PM via the pop-up window just refuses point blank to work.