Laravel queue environment

Ran into an issue running the queue through Supervisor. I'll give a quick rundown of my setup, the issue, and the solution.

I have supervisor running the queue daemon.

[program:laravel_queue]
command=php /var/www/example.com/artisan queue:listen
autostart=true
autorestart=true
stderr_logfile=/var/log/laraqueue.err.log
stdout_logfile=/var/log/laraqueue.out.log

The key part to check is that the environment variable isn't set automatically for the queue. You'll need to set it via the `--env` flag.

[program:laravel_queue]
command=php /var/www/example.com/artisan queue:listen --env=prod
autostart=true
autorestart=true
stderr_logfile=/var/log/laraqueue.err.log
stdout_logfile=/var/log/laraqueue.out.log

That's what I was missing. Once I ran `sudo supervisorctl restart laravel_queue`, all was well with the queue.
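Supervisor can also pass the environment straight to the worker process via its `environment=` key. A sketch, assuming a Laravel version that reads `APP_ENV` (newer releases do; older ones rely on `--env`):

```ini
[program:laravel_queue]
command=php /var/www/example.com/artisan queue:listen
environment=APP_ENV="prod"
autostart=true
autorestart=true
```

Either way the point is the same: the worker doesn't inherit your shell's environment, so you have to tell it explicitly which one to use.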

Docker remove ALL volumes not attached

Docker can be tricky to debug. There are instances where, after upgrading a container, the volume that was previously attached fails to work.

In this case it was a dev image, and the quickest (and dirtiest) option was to clear out and rebuild. I could have gone one by one to delete the specific volumes.

docker volume ls
docker volume rm {specific_volume_name}

Instead I chose to delete all volumes not currently in use. This worked for my use case as I only have two Docker setups; I left the one I wanted to keep running while executing the following command.

docker volume rm `docker volume ls -q -f dangling=true`
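The backticks there are just command substitution: the inner command's output becomes the outer command's arguments. The same pattern, simulated with ordinary files (made-up names) so it's safe to try anywhere:

```shell
# Simulated version of the volume cleanup: list matching items, hand them to rm.
# mkdir/touch/ls stand in for docker volume ls/rm here.
demo=$(mktemp -d)
cd "$demo"
touch keep.txt stale-1.tmp stale-2.tmp
rm `ls *.tmp`      # backtick substitution: ls output becomes rm's arguments
ls
```

Newer Docker versions also ship `docker volume prune`, which removes all unused volumes in one built-in command.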

grep without filenames

I needed to grep a directory and its subdirectories without displaying the filenames in the output. This is when the man page came to the rescue.

man grep | grep filename

Output

       -H, --with-filename
       -h, --no-filename

There we go: `-h`, `--no-filename`, is what I needed.

Example with output to a file:

grep -r "searching-for-this" . --no-filename > /tmp/test.txt
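A self-contained way to see the flag in action, with a couple of throwaway files under /tmp:

```shell
# -h / --no-filename suppresses the "path:" prefix on each match
mkdir -p /tmp/grep-demo/sub
echo "searching-for-this here" > /tmp/grep-demo/a.txt
echo "searching-for-this too"  > /tmp/grep-demo/sub/b.txt
grep -rh "searching-for-this" /tmp/grep-demo
```

Without `-h`, each of those lines would be prefixed with the file it came from.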

Vagrant error on up

I was executing a `vagrant up` on my machine. It was giving an NFS error.

mount -o 'vers=3,udp' 192.168.10.1:'/opt/<snip>/mozilla/kuma' /home/vagrant/src

Stdout from the command was empty.

Stderr from the command:

stdin: is not a tty
mount.nfs: requested NFS version or transport protocol is not supported

I went hunting for a resolution. I needed to install the NFS libraries.

sudo apt-get install nfs-common nfs-kernel-server

Then it would boot.
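A quick way to check whether the client-side helper is already there before retrying the mount (a sketch; `mount.nfs` is the binary that nfs-common provides on Debian/Ubuntu):

```shell
# Prints the helper's path if installed, otherwise a reminder to install it
command -v mount.nfs || echo "mount.nfs missing - install nfs-common"
```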

Simple Python Web Scraper

I needed a simple HTML-only scraper (this doesn't execute JS, so it won't pull down data loaded via AJAX). I found an example on another site, thetaranights.com, but it wasn't exactly what I needed: it only pulled the data and printed it to screen. I added a list of URLs to loop through and automatic saving to an HTML file named after each URL.

import mechanize  # pip install mechanize

br = mechanize.Browser()
br.set_handle_robots(False)
br.addheaders = [("User-agent", "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13")]

sign_in = br.open("https://this.example.com/login")  # the login url

br.select_form(nr=0)  # access the form by index; since there is only one form in this example, nr=0
# br.select_form(name="form name")  # alternatively, use this if the form has a name attribute

br["email"] = "email or username"  # the key "email" is the form field that takes the username/email value
br["password"] = "password"        # the key "password" is the form field that takes the password value

logged_in = br.submit()        # submit the login credentials
logincheck = logged_in.read()  # read the page body returned after a successful login

urls = ["https://this.example.com/some/page", "https://this.example.com/some/page2"]

for url in urls:
    req = br.open(url).read()
    filename = url.split('/')[-1] + ".html"
    with open(filename, 'w') as f:
        f.write(req)

Which produces 2 files:
page.html
page2.html
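The save-by-URL-name step is just string splitting on `/`. Pulled out into a small function (a hypothetical helper, not part of the original script), it's easy to verify on its own:

```python
def filename_for(url):
    """Derive an output filename from the last path segment of a URL."""
    return url.split('/')[-1] + ".html"

urls = ["https://this.example.com/some/page",
        "https://this.example.com/some/page2"]
print([filename_for(u) for u in urls])  # ['page.html', 'page2.html']
```

One caveat: a URL ending in a trailing `/` would yield just `.html`, so this only suits URLs shaped like the ones above.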

How to check cp progress after cp has started

First off, the easier way is to use rsync from the start.

rsync -avh --progress sourceDirectory destinationDirectory

However, sometimes you think cp will be quick and you've already kicked it off. Here is a quick way to check the progress of a copy after it has started.

watch lsof -p`pgrep -x cp`

This will show you what it is transferring and how much it has left to do, giving you a way to check the progress of the cp command.
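The `pgrep -x` half of that pipeline is easy to try on any long-running process, using `sleep` as a stand-in for `cp`:

```shell
# pgrep -x matches the exact process name and prints its PID(s);
# `watch lsof -p <pid>` would then poll that process's open files.
sleep 30 &
pid=$(pgrep -x sleep | head -n 1)
echo "found pid: $pid"
kill "$pid"
```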

To find out more check out these two SO links:

Check progress

CP Command Help

Ubuntu Command line Display Driver Change

Recently upgraded to 15.10 and it looked okay at first. However, the display driver it installed, NVIDIA's, was causing issues. So I defaulted back to the open source driver and rebooted. MISTAKE! That caused the system to crash, and I couldn't do anything about it.

I rebooted into a root terminal, then ran a command to display the available drivers.

ubuntu-drivers devices

That spit out a list of drivers. Then I reinstalled the latest available NVIDIA driver.

apt-get install nvidia-352-updates

Failed to spawn command gulp

I ran into an error with Atom: "Failed to spawn command gulp". The solution for me was to install the module globally.

$ sudo npm install -g gulp
/usr/bin/gulp -> /usr/lib/node_modules/gulp/bin/gulp.js
(followed by gulp's installed dependency tree)

Easy and done.

GIT push from one repo to another

Splitting off from a branch to another git repo seems like it would suck. In reality it’s simple. Two lines and you’re set.

Things you’ll need:

  • Source repo pulled down locally
  • Target repo created on your git server (GitHub, BitBucket, GitLab, etc)
  • URL for target git repo
  • Branches

$ cd /path/to/source
$ git remote add targetrepo git@<your-git-server>:my_team/my_awesome_target_repo.git
$ git push targetrepo my_branch_to_create_off_of:master

That'll create the master branch in the target repo off of the branch "my_branch_to_create_off_of" from your local repo.
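The whole flow can be rehearsed locally with throwaway repos (all paths and names below are made up for the demo):

```shell
# Create a source repo with a branch, a bare "remote" target, and push across.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/source"
cd "$tmp/source"
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "initial"
git checkout -q -b my_branch_to_create_off_of
git init -q --bare "$tmp/target.git"
git remote add targetrepo "$tmp/target.git"
git push -q targetrepo my_branch_to_create_off_of:master
git --git-dir="$tmp/target.git" branch    # shows the master branch
```

The refspec `local:remote` on the push line is what maps your local branch onto the target's master.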