It is common knowledge that you can use the FormData class to send a file via AJAX as follows:
var DataToSend=new FormData();
DataToSend.append(PostVariableName, VariableData); //Send a normal variable
DataToSend.append(PostFileVariableName, FileElement.files[0], PostFileName); //Send a file
var xhr=new XMLHttpRequest();
xhr.open("POST", YOUR_URL, true);
xhr.send(DataToSend);
Something that is much less known, which doesn't have any really good full-process examples online (that I could find), is sending a URL's file as the posted file. This is doable by downloading the file as a Blob, and then directly passing that blob to the FormData. The 3rd parameter to the FormData.append should be the file name.
The following code demonstrates downloading the file. I did not worry about adding error checking.
function DownloadFile(
    FileURL,     //http://...
    Callback,    //The function to call back when the file download is complete. It receives the file Blob.
    ContentType) //The output Content-Type for the file. Example: image/jpeg
{
    var Req=new XMLHttpRequest();
    Req.responseType='arraybuffer';
    Req.onload=function() {
        Callback(new Blob([this.response], {type:ContentType}));
    };
    Req.open("GET", FileURL, true);
    Req.send();
}
And the following code demonstrates submitting that file:
DownloadFile(DownloadURL, function(DownloadedFileBlob) {
    //Get the data to send
    var Data=new FormData();
    Data.append(PostFileVariableName, DownloadedFileBlob, OutputFileName);

    //Function to run on completion
    var CompleteFunction=function(ReturnData) {
        //Add your code in this function to handle the ajax result
        var ReturnText=(ReturnData.responseText ? ReturnData : this).responseText;
        console.log(ReturnText);
    };

    //Normal AJAX example
    var Req=new XMLHttpRequest();
    Req.onload=CompleteFunction; //You can also use "onreadystatechange", which is required for some older browsers
    Req.open("POST", PostURL, true);
    Req.send(Data);

    //jQuery example
    $.ajax({
        type:'POST',
        url:PostURL,
        data:Data,
        contentType:false,
        processData:false,
        cache:false,
        complete:CompleteFunction
    });
});
Unfortunately, due to the browser's same-origin security policy (the restriction behind cross-site scripting [XSS] protections), you can generally only use AJAX to query URLs on the same domain. I use my Cross site scripting solutions and HTTP Forwarders for this. Stack Overflow also has a good thread about it.
Phar files are PHP’s way of distributing an entire PHP solution in a single package file. I recently had a problem on my Cygwin PHP server that said “Unable to find the wrapper "phar" - did you forget to enable it when you configured PHP?”. I couldn’t find any solution for this online, so I played with it a bit.
The quick and dirty solution I came up with is to include the phar file like any normal PHP file, which sets your current working directory inside of the phar file. After that, you can include files inside the phar and then change your directory back to where you started. Here is the code I used:
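A minimal sketch of that workaround (the phar name and internal path here are hypothetical placeholders for your own):

```php
<?php
//Remember the real working directory before touching the phar
$OriginalDir=getcwd();

//Include the phar like any normal PHP file; per the workaround above, the
//working context is now inside the phar, so no "phar://" wrapper is needed
require 'YourPackage.phar';

//Includes now resolve against files inside the phar
require 'internal/SomeFile.php';

//Change back to where we started
chdir($OriginalDir);
```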
So I was recently hired to set up a go-between system that would allow two independent websites to directly communicate and transfer/copy data between each other via a web browser. This is obviously normally not possible due to the browser's cross-site (same-origin) security policy, so I gave the client 2 possible solutions. Both of these solutions are written with the assumption that there is a go-between intermediary iframe/window, on a domain that they control, between the 2 independent site iframes/windows. This would also work fine for one site you control against a site you do not control.
Tell the browser to ignore this security requirement:
For example, if you add "--disable-web-security" to the Chrome command-line arguments, cross-site security checks will be removed. However, Chrome will prominently display on the very first tab (which can be closed) at the top of the browser: "You are using an unsupported command-line flag: --disable-web-security. Stability and security will suffer". This can be scary to the user, and could also allow security breaches if the user utilizes that browser [session] for anything except the application page.
The more appropriate way to do it, which requires a bit of work on the administrative end, is having all 3 sites pretend to run off of the same domain. To do this:
You must have a domain that you control, which we will call UnifyingDomain.com (This top level domain can contain subdomains)
The 2 sites that YOU control would need a JavaScript line of “document.domain='UnifyingDomain.com';” somewhere in them. These 2 sites must also be run off of a subdomain of UnifyingDomain.com, (which can also be done through apache redirect directives).
The site that you do not control would need to be forwarded through your UnifyingDomain.com (not a subdomain) via an apache permanent redirect.
This may not work, if their site programmer is dumb and does not use proper relative links for everything (absolute links are the devil :-) ). If this is the case:
You can use a [http] proxy to pull in their site through your domain (in which case, if you wanted, you could inject a “domain=”)
You can use the domain that you do not control as the top level UnifyingDomain.com, and add rules into your computer’s hostname files to redirect its subdomains to your IPs.
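A rough sketch of the subdomain setup, assuming hypothetical hostnames and an Apache front end (here using mod_proxy for the uncontrolled site; the exact directives depend on your environment):

```apache
# One of your two controlled sites, served from a subdomain of the unifying
# domain; each of its pages also runs: document.domain='UnifyingDomain.com';
<VirtualHost *:80>
    ServerName siteA.UnifyingDomain.com
    DocumentRoot /var/www/siteA
</VirtualHost>

# The site you do not control, pulled in through the unifying domain itself
<VirtualHost *:80>
    ServerName UnifyingDomain.com
    ProxyPass        / http://TheirSite.example.com/
    ProxyPassReverse / http://TheirSite.example.com/
</VirtualHost>
```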
This project is why I ended up making my HTTP Forwarders client in go (coming soon).
It has always really bugged me that in Chrome, when you want to view the response and form data for an AJAX request listed in the console, you have to go through multiple annoying clicks to view these two pieces of data, which are also on separate tabs. There is a great Chrome extension though called AJAX-Debugger that gets you all the info you need on the console. However, it also suffered from the having-to-click-through problem for the request data (5th object deep in an object nest), and it did not support JSONP. I’ve gone ahead and fixed these 2 problems :-).
Now to get around to making the other Chrome plugin I’ve been needing for a while ... (Automatic devtool window focus when focusing its parent window)
[Edit on 2015-08-20 @ 8:10am] I added another patch to the Chrome extension to atomically run the group calls (otherwise, they sometimes showed out of order).
Also, the auto focusing thing is not possible as a pure extension due to chrome API inadequacies. While it would be simple to implement using an interval poll via something like Auto Hot Key, I really hate [the hack of] making things constantly poll to watch for something. I’m thinking of a hybrid chrome extension+AHK script as a solution.
Here is a little Tampermonkey script for Chrome that automatically clicks the “Continue playing” button when Netflix pops it up and pauses the current stream.
// ==UserScript==
// @name         Netflix auto continue play
// @namespace    https://www.castledragmire.com/Posts/Netflix_Auto_Continue_Play
// @version      1.0
// @description  When netflix pops up the "Continue play" button, this script auto-selects "Continue" within 1 second
// @author       Dakusan
// @match        http://www.netflix.com/
// @grant        none
// ==/UserScript==
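The script body boils down to polling for the button and clicking it. A minimal sketch, factored so the document lookup is injectable; the button-matching text is an assumption, since Netflix's actual markup may differ:

```javascript
//Look through doc for a button whose label looks like "Continue Playing" and
//click it. Returns true if a button was found and clicked.
function ClickContinue(doc) {
    var Buttons=doc.querySelectorAll('button');
    for(var I=0; I<Buttons.length; I++)
        if(/continue/i.test(Buttons[I].textContent)) {
            Buttons[I].click();
            return true;
        }
    return false;
}

//In the userscript body, poll once a second:
//setInterval(function() { ClickContinue(document); }, 1000);
```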
I just threw together a quick script to report status on a MySQL replication ring. While replication rings have been the only real multi-master MySQL solution for replication (with the ability for nodes to go down without majorly breaking things) until recently, I have read that MariaDB (still not MySQL) now allows a slave to have multiple masters, meaning many replication topologies are now possible (star, mesh, etc). This script could easily be adapted for those circumstances too.
This script will report all the variables from “SHOW MASTER STATUS” and “SHOW SLAVE STATUS” from all servers in your replication ring, in a unified table. It also includes a “Pretty Status” row that lets you quickly see how things look. The possibilities for this row are:
Bad state: ... This shows if the Slave_IO_State is not “Waiting for master to send event”
Cannot determine master’s real position This shows if the Position variable on the master could not be read
On old master file This shows if the slave’s “Master_Log_File” variable does not equal the master’s “File” variable
Bytes behind: xxx This shows if none of the above errors occurred. It subtracts the master’s “Position” from the slave’s “Read_Master_Log_Pos”. This should generally be at or around 0. A negative value essentially means 0 (this should only happen between the last and first server).
The “Seconds_Behind_Master” variable can also be useful for determining the replication ring’s current replication status.
The code is below the example. The entire source file can also be found here. The 3 variables that need to be configured are at the top of the file. It assumes that all servers are accessible via the single given username and password.
Example:
Master
Server Name                    EXAMPLE1.MYDOMAIN.COM             EXAMPLE2
File                           mysql-bin.000003                  mysql-bin.000011
Position                       25249746                          3215834
Binlog_Do_DB                   example_data,devexample_data      example_data,devexample_data
Binlog_Ignore_DB
Slave
Pretty Status                  Bytes behind: 0                   Bytes behind: 0
Slave_IO_State                 Waiting for master to send event  Waiting for master to send event
Master_Host                    EXAMPLE2                          EXAMPLE1.MYDOMAIN.COM
Master_User                    example_slave                     example_slave
Master_Port                    3306                              3306
Connect_Retry                  60                                60
Master_Log_File                mysql-bin.000011                  mysql-bin.000003
Read_Master_Log_Pos            3215834                           25249746
Relay_Log_File                 www-relay-bin.070901              www-relay-bin.071683
Relay_Log_Pos                  252                               252
Relay_Master_Log_File          mysql-bin.000011                  mysql-bin.000003
Slave_IO_Running               Yes                               Yes
Slave_SQL_Running              Yes                               Yes
Replicate_Do_DB                example_data,devexample_data      example_data,devexample_data
Replicate_Ignore_DB
Replicate_Do_Table
Replicate_Ignore_Table
Replicate_Wild_Do_Table
Replicate_Wild_Ignore_Table
Last_Errno                     0                                 0
Last_Error
Skip_Counter                   0                                 0
Exec_Master_Log_Pos            3215834                           25249746
Relay_Log_Space                552                               552
Until_Condition                None                              None
Until_Log_File
Until_Log_Pos                  0                                 0
Master_SSL_Allowed             No                                No
Master_SSL_CA_File
Master_SSL_CA_Path
Master_SSL_Cert
Master_SSL_Cipher
Master_SSL_Key
Seconds_Behind_Master          0                                 0
Master_SSL_Verify_Server_Cert  No                                No
Last_IO_Errno                  0                                 0
Last_IO_Error
Last_SQL_Errno                 0                                 0
Last_SQL_Error
Replicate_Ignore_Server_Ids                                      Not given
Master_Server_Id               2                                 Not given
Code:
<?
//Configurations
$Servers=Array('SERVER1.YOURDOMAIN.COM', 'SERVER2.YOURDOMAIN.COM'); //List of host names to access mysql servers on. This must be in the order of the replication ring.
$SlaveUserName='SLAVE_RING_USERNAME'; //This assumes all servers are accessible via this username with the same password
$SlavePassword='SLAVE_RING_PASSWORD';

//Get the info for each server
$ServersInfo=Array(); //SERVER_NAME=>Array('Master'=>Array(Col1=>Val1, ...), 'Slave'=>Array(Col1=>Val1, ...))
$ColsNames=Array('Master'=>Array('Server Name'=>0), 'Slave'=>Array('Pretty Status'=>0)); //The column names for the 2 (master and slave) queries. Custom column names are also added here
$CustomFieldNames=array_merge($ColsNames['Master'], $ColsNames['Slave']); //Store the custom column names so they are not HTML escaped later
foreach($Servers as $ServerName)
{
    //Connect to the server
    $Link=@new mysqli($ServerName, $SlaveUserName, $SlavePassword);
    if($Link->connect_error)
        die(EHTML("Connection error to $ServerName server: $Link->connect_error"));

    //Get the replication status info from the server
    $MyServerInfo=$ServersInfo[$ServerName]=Array(
        'Master'=>$Link->Query('SHOW MASTER STATUS')->fetch_array(MYSQLI_ASSOC),
        'Slave'=>$Link->Query('SHOW SLAVE STATUS')->fetch_array(MYSQLI_ASSOC)
    );
    mysqli_close($Link); //Close the connection

    //Gather the column names
    foreach($ColsNames as $ColType=>&$ColNames)
        foreach($MyServerInfo[$ColType] as $ColName=>$Dummy)
            $ColNames[$ColName]=0;
}
unset($ColNames);

//Gather the pretty statuses
foreach($Servers as $Index=>$ServerName)
{
    //Determine the pretty status
    $SlaveInfo=$ServersInfo[$ServerName]['Slave'];
    $MasterInfo=$ServersInfo[$Servers[($Index+1)%count($Servers)]]['Master'];
    if($SlaveInfo['Slave_IO_State']!='Waiting for master to send event')
        $PrettyStatus='Bad state: '.EHTML($SlaveInfo['Slave_IO_State']);
    elseif(!isset($MasterInfo['Position']))
        $PrettyStatus='Cannot determine master’s real position';
    elseif($SlaveInfo['Master_Log_File']!=$MasterInfo['File'])
        $PrettyStatus='On old master file';
    else
        $PrettyStatus='Bytes behind: '.($MasterInfo['Position']-$SlaveInfo['Read_Master_Log_Pos']);

    //Add the server name and pretty status to the output columns
    $ServersInfo[$ServerName]['Master']['Server Name']='<div class=ServerName>'.EHTML($ServerName).'</div>';
    $ServersInfo[$ServerName]['Slave']['Pretty Status']='<div class=PrettyStatus>'.EHTML($PrettyStatus).'</div>';
}

//Output the document
function EHTML($S) { return htmlspecialchars($S, ENT_QUOTES, 'UTF-8'); } //Escape HTML
?>
<!DOCTYPE html>
<html>
<head>
    <title>Replication Status</title>
    <meta charset="UTF-8">
    <style>
        table { border-collapse:collapse; }
        table tr>* { border:1px solid black; padding:3px; }
        th { text-align:left; font-weight:bold; }
        .ReplicationDirectionType { font-weight:bold; text-align:center; color:blue; }
        .ServerName { font-weight:bold; text-align:center; color:red; }
        .PrettyStatus { font-weight:bold; color:red; }
        .NotGiven { font-weight:bold; }
    </style>
</head>
<body><table>
<?
//Output the final table
foreach($ColsNames as $Type=>$ColNames) //Process by direction type (Master/Slave) then columns
{
    print '<tr><td colspan='.(count($Servers)+1).' class=ReplicationDirectionType>'.$Type.'</td></tr>'; //Replication direction (Master/Slave) type title column
    foreach($ColNames as $ColName=>$Dummy) //Process each column name individually
    {
        print '<tr><th>'.EHTML($ColName).'</th>'; //Column name
        $IsHTMLColumn=isset($CustomFieldNames[$ColName]); //Do not escape HTML on custom fields
        foreach($ServersInfo as $ServerInfo) //Output the column for each server
            if($IsHTMLColumn) //Do not escape HTML on custom fields
                print '<td>'.$ServerInfo[$Type][$ColName].'</td>';
            else //If not a custom field, output the escaped HTML of the value. If the column does not exist for this server (different mysql versions), output "Not given"
                print '<td>'.(isset($ServerInfo[$Type][$ColName]) ? EHTML($ServerInfo[$Type][$ColName]) : '<div class=NotGiven>Not given</div>').'</td>';
        print '</tr>';
    }
}
?>
</table></body>
</html>
One final note: when running this script, you may need to make sure none of the listed server hostnames resolves to localhost (127.x.x.x), as MySQL may then use the local socket pipe instead of TCP, which may not work for users that only have REPLICATION permissions and a wildcard host.
So for a long time, http://castledragmire.com (without the www) brought up an intro page, made in Flash, while the www brought up this projects page (Dakusan’s Domain). I’ve decided it was time to retire it, if anyone ever stumbled upon it. I think I made this Flash in December of 2004, using the PSD of the below image, which my good friend Adam Shen had kindly provided to me. I would never claim to be good at design or making visual things, but I was proud of it :-)
Description: An interface to manage Recipes for Guild Wars 2. The interface allows filtering and sorting recipes by many variables. It also has user toggleable checkboxes per recipe that you can use to group and filter recipes. For example, all recipes with the first checkbox selected might be recipes your primary character already knows.
Information:
This project has 2 parts. The first pulls all of the Item and Recipe info for Guild Wars 2 into a database. The second is a client side only recipe management interface (no server processing).
I threw this together for a friend in 8 hours, as some of its functionality coincided with stuff I needed for another one of my projects. It was not meant to be pretty, so the interface is a bit spartan, and the code comments are a bit lacking. It also doesn’t check user input very thoroughly :-) .
So I got a new computer back in April and have finally gotten around to doing some speed tests to see how different applications and settings affect performance/hard drive read speed.
The following is the (relevant) computer hardware configuration:
Motherboard: MSI Z87-GD65
CPU: Intel Core i7-4770K Haswell 3.5GHz
GPU: GIGABYTE GV-N770OC-4GD GeForce GTX 770 4GB
RAM: Crucial Ballistix Tactical 2*8GB
2*Solid state drives (SSD): Crucial M500 480GB SATA 2.5" 7mm
7200RPM hard drive (HDD): Seagate Barracuda 3TB ST3000DM001
Power Supply: RAIDMAX HYBRID 2 RX-730SS 730W
CPU Water Cooler: CORSAIR H100i
Case Fans: 2*Cooler Master MegaFlow 200, 200mm case fan
Test setup:
I started with a completely clean install of Windows 7 Ultimate N x64 to gather these numbers.
The first column is the boot time, measured from when the "Starting Windows" animation starts to when the user login screen shows up, so the BIOS is not included. I used a stopwatch to get these boot numbers (in seconds), so they are not particularly accurate.
The second and third columns are the time (in seconds) to run "time md5sum" in cygwin64 on a 1.39GB file (1,503,196,839 bytes), on the solid state (SSD) and 7200RPM (HDD) drives respectively. They were taken immediately after boot, so caching and other applications using resources are not variables. I generally did not worry about running the tests multiple times and taking lowest-case numbers. The shown millisecond fluctuations are within the margin of error for software measurements due to context switches.
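For reference, each measurement was just the stock Cygwin tools; a sketch of the procedure (using a small scratch file here in place of the real 1.39GB test file on each drive):

```shell
# Create a scratch file and time an MD5 hash of it; for the real tests the
# target was the large file on the SSD or HDD, and "real" was the number kept
TestFile=$(mktemp)
dd if=/dev/zero of="$TestFile" bs=1024 count=1024 2>/dev/null
time md5sum "$TestFile"
rm -f "$TestFile"
```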
Results:
Boot times increased across several of the steps, as seen below, but not too badly. The only thing that affected the md5sum was adding the hardware mirror RAID on the SSDs, which cut the time of the md5 in half. So overall, antivirus and system encryption did not have any noticeable effect on the computer's performance (at least regarding IO on a single file and number crunching).
Numbers:
What was added                                      Boot  SSD    HDD    Notes
Initial installation                                4     -      -
NIC Drivers and Cygwin                              7     4.664  8.393  I'm not sure why the boot time jumped so much at this point. The initial number might have been a fluke.
All Windows updates + drivers + 6 monitors          14    4.618  8.393  The boot time jumped up a lot due to having to load all the monitors
Raid 1 mirror[Windows] on SSDs + no page file       17    4.618  8.393  This was removed once I realized Truecrypt could not be used on a dynamic disk (Windows software) RAID
Raid 1 mirror[hardware] on SSDs + no page file      17    2.246  8.408
Truecrypt System Volume Encryption (SSD Raid Only)
Information: My music directories have been growing for over 2 decades in a folder based hierarchy, often using playlists for organization. Plex’s music organization is counterintuitive to this organizational structure, and Plex currently does not have an easy way to import external playlists. Hence this script was born :-)