Linux users. A quick crontab question.
W07VH5
mark123
posted
I've got a small homelab going, with 3 distinct servers at the moment. The first is my pfSense box (router/firewall/DNS/DHCP/ad_block). Second is my NAS (FreeNAS, no longer running any virtual machines, so it is specifically a backup file server). And the most recent addition is the checkbook register server (mentioned here: https://sigforum.com/eve/forum...601935/m/5190061784/ ).

I've got the crontab for the database server's user (PostgreSQL) set to back up the database just after midnight on Sunday and then, 30 minutes later, scp that backup over to the NAS.

However, I don't usually leave the NAS on. I just turn it on when I need it. So what happens to the cron job if I forget to turn on the NAS to receive the file? Will it just abort? Will the backup file creation still happen on the checkbook server?

Here is the current crontab:
1 0 * * 7 pg_dump -U postgres checkbook > ~/backups/checkbook_$(date +\%Y-\%m-\%d).bak
31 0 * * 0 scp ~/backups/checkbook_$(date +\%Y-\%m-\%d).bak postgres@freenas:/mnt/NAS1Pool/db_backups


Is there a better way to do the cron so that it only copies over files that don't already exist?
 
Posts: 45565 | Location: Pennsyltucky | Registered: December 05, 2001
Member
posted
Check out the rsync command.
 
Posts: 462 | Location: Illinois | Registered: June 13, 2020
W07VH5
mark123
posted
quote:
Originally posted by Jimmo952:
Check out the rsync command.
Yeah, that does make a little more sense. Changed to:
31 0 * * 0 rsync -a ~/backups postgres@freenas:/mnt/NAS1Pool/db_backups
 
Posts: 45565 | Location: Pennsyltucky | Registered: December 05, 2001
Member
posted
First, yep, it will just fail to copy. If you want to guard against NAS availability, try this:
Have the backup script write to a temp file, then mv the file into the backups directory. Set your rsync to fire every 30 minutes or so to sync the backups directory against the NAS. It will only push differences, not re-copy everything, and if it fails because the NAS isn't up, it will simply catch back up the next time the NAS host is reachable.
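Something along these lines (a rough, untested sketch; the paths and the freenas host are borrowed from your posts above, and the script name is made up):

#!/bin/sh
# backup_checkbook.sh - dump to a temp file on the same filesystem,
# then mv it into ~/backups. mv within one filesystem is atomic, so
# rsync never sees a half-written dump.
set -e
TMP="$HOME/.checkbook_tmp.$$"
pg_dump -U postgres checkbook > "$TMP"
mv "$TMP" "$HOME/backups/checkbook_$(date +%Y-%m-%d).bak"

Then the crontab becomes (no \% escaping needed inside the script, only on crontab lines):

1 0 * * 0 ~/backup_checkbook.sh
*/30 * * * * rsync -a ~/backups/ postgres@freenas:/mnt/NAS1Pool/db_backups/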


--
I always prefer reality when I can figure out what it is.

JALLEN 10/18/18
https://sigforum.com/eve/forum...610094844#7610094844
 
Posts: 2395 | Location: Roswell, GA | Registered: March 10, 2009
Optimistic Cynic
architect
posted
The delay is to ensure that job1 will finish before job2 runs? Why not put the two commands in a shell script and just run the shell script in a single crontab entry? If you want to get fancy, you can put logic in the script to check for server availability before attempting the copy, and to try again later if the server is unavailable (or just check the exit status of the copy procedure, and re-try after a delay if it fails).
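A rough sketch of that combined script (untested; hostnames and paths are from the earlier posts, and the retry count and interval are arbitrary):

#!/bin/sh
# checkbook_backup.sh - dump the db, then copy to the NAS,
# retrying a few times if the NAS isn't reachable
pg_dump -U postgres checkbook > ~/backups/checkbook_$(date +%Y-%m-%d).bak || exit 1

for try in 1 2 3; do
    # quick availability test: can we open an ssh session? (5-second timeout)
    if ssh -o ConnectTimeout=5 postgres@freenas true; then
        exec rsync -a ~/backups/ postgres@freenas:/mnt/NAS1Pool/db_backups/
    fi
    sleep 1800    # NAS is down; wait 30 minutes and try again
done
echo "giving up: freenas unreachable" >&2
exit 1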

Or, put your script in /etc/cron.daily and let the regularly-scheduled system jobs run it for you (use existing daily scripts as templates for yours).

I agree that rsync to copy between machines will give better results than scp. Rsync will do its magic inside an ssh pipe if interception on the wire is a concern.

Even if you are saving date-stamped snapshots rather than updating a single target, rsync will be faster than scp.
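(rsync already defaults to ssh for remote paths on modern systems; to be explicit about the transport, or to pass ssh options, use -e:)

rsync -a -e "ssh -o ConnectTimeout=5" ~/backups/ postgres@freenas:/mnt/NAS1Pool/db_backups/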

The problem with running rsync at short intervals is that multiple processes may interfere with each other, defeating the "only transfer what's new" logic.
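If the checkbook server is Linux, one easy guard is util-linux's flock, which makes a second run exit instead of piling up (that the box is Linux is an assumption; FreeBSD has lockf(1) for the same job):

*/30 * * * * flock -n /tmp/db_backup.lock rsync -a ~/backups/ postgres@freenas:/mnt/NAS1Pool/db_backups/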

And, oh yeah, put a slash on the end of the source path in your rsync command to avoid creating a new directory under /mnt/NAS1Pool/db_backups.
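That is, the trailing slash on the source makes rsync copy the directory's contents rather than the directory itself:

rsync -a ~/backups postgres@freenas:/mnt/NAS1Pool/db_backups/     # lands in db_backups/backups/
rsync -a ~/backups/ postgres@freenas:/mnt/NAS1Pool/db_backups/    # lands directly in db_backups/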
 
Posts: 6794 | Location: NoVA | Registered: July 22, 2009
Member
posted
quote:
Originally posted by architect:
The problem with running rsync at short intervals is that multiple processes may interfere with each other, defeating the "only transfer what's new" logic.


Agreed, every 30 minutes should allow plenty of time for what are going to be tiny db dumps to keep up. Or add logic to check for an already-running instance and exit.


--
I always prefer reality when I can figure out what it is.

JALLEN 10/18/18
https://sigforum.com/eve/forum...610094844#7610094844
 
Posts: 2395 | Location: Roswell, GA | Registered: March 10, 2009
W07VH5
mark123
posted
quote:
Originally posted by SigJacket:
First, yep, it will just fail to copy. …
Thanks. I wasn’t sure how crontab handled such errors. I was pretty sure it wouldn’t do something silly like just stop all crons but I wanted to be positive.
 
Posts: 45565 | Location: Pennsyltucky | Registered: December 05, 2001
W07VH5
mark123
posted
quote:
Originally posted by SigJacket:
quote:
Originally posted by architect:
The problem with running rsync at short intervals is that multiple processes may interfere with each other, defeating the "only transfer what's new" logic.


Agreed, every 30 minutes should allow plenty of time for what are going to be tiny db dumps to keep up. Or add logic to check for an already-running instance and exit.
Well, right now I'm running it weekly. So I'm sure it'll be good to go.

I could run the rsync more often since it only copies files that aren't already on the destination.

architect, I did add the / after seeing the extra backups folder on the NAS.
 
Posts: 45565 | Location: Pennsyltucky | Registered: December 05, 2001
Itchy was taken
scratchy
posted
quote:
Originally posted by mark123:
quote:
Originally posted by SigJacket:
First, yep, it will just fail to copy. …
Thanks. I wasn’t sure how crontab handled such errors. I was pretty sure it wouldn’t do something silly like just stop all crons but I wanted to be positive.


cron itself will keep running, and the job will simply fail. Error handling and precondition tests built into the script that cron runs are where the smarts are. Test the connection, and fail before running the actual work; that should do the trick. Have it email you the result.

cron will log the success or failure.
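A rough sketch of that pattern (the address and script name are placeholders; the MAILTO behavior, cron mailing anything the job prints, is standard):

MAILTO=mark@example.com
31 0 * * 0 ~/sync_backups.sh

where sync_backups.sh is something like:

#!/bin/sh
# fail fast if the NAS doesn't answer; cron mails whatever we print
ssh -o ConnectTimeout=5 postgres@freenas true || { echo "NAS down, skipping sync"; exit 1; }
rsync -a ~/backups/ postgres@freenas:/mnt/NAS1Pool/db_backups/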


_________________
This space left intentionally blank.
 
Posts: 4099 | Location: Colorado | Registered: August 24, 2008
Baroque Bloke
Pipe Smoker
posted
quote:
Originally posted by mark123:
quote:
Originally posted by Jimmo952:
Check out the rsync command.
Yeah, that does make a little more sense. Changed to:
31 0 * * 0 rsync -a ~/backups postgres@freenas:/mnt/NAS1Pool/db_backups

I suggest two additions to the option list of your rsync command.

#1: “-H” so that hard links in the source will be preserved as such in the destination (rather than being rendered as multiple identical, but unrelated, files).

#2: “--del” so that objects that you’ve deleted from the source will be removed from the destination.

With those two additions your rsync command would be:
rsync -aH --del



Serious about crackers
 
Posts: 9474 | Location: San Diego | Registered: July 26, 2014
W07VH5
mark123
posted
quote:
Originally posted by Pipe Smoker:
#1: “-H” so that hard links in the source will be preserved as such in the destination (rather than being rendered as multiple identical, but unrelated, files).

#2: “--del” so that objects that you’ve deleted from the source will be removed from the destination.

With those two additions your rsync command would be:
rsync -aH --del


I'm not sure I understand the -H switch. There are no multiple identical files on the destination.

I want the backup to retain files deleted from the source.
 
Posts: 45565 | Location: Pennsyltucky | Registered: December 05, 2001
Optimistic Cynic
architect
posted
quote:
Originally posted by mark123:
quote:
Originally posted by Pipe Smoker:
#1: “-H” so that hard links in the source will be preserved as such in the destination (rather than being rendered as multiple identical, but unrelated, files).

#2: “--del” so that objects that you’ve deleted from the source will be removed from the destination.

With those two additions your rsync command would be:
rsync -aH --del


I'm not sure I understand the -H switch. There are no multiple identical files on the destination.

I want the backup to retain files deleted from the source.
File systems on Unix-like OSes can create both "hard" and "soft" links to a file object. The difference is that a hard link is an additional name for the file, while a soft link is a pointer reference to the file. An application program will usually treat a hard link as a separate file, so without the -H you will have two copies of the same file on the backup media, with two different names.

Example:

[asok:/tmp] gfoster% echo "this is a text file" > test.txt
[asok:/tmp] gfoster% ln test.txt test.eml
[asok:/tmp] gfoster% ln -s test.txt softtest.link
[asok:/tmp] gfoster% ls -l *test*
lrwxr-xr-x 1 gfoster wheel 8 Oct 10 17:02 softtest.link -> test.txt
-rw-r--r-- 2 gfoster wheel 20 Oct 10 17:01 test.eml
-rw-r--r-- 2 gfoster wheel 20 Oct 10 17:01 test.txt
[asok:/tmp] gfoster% cat softtest.link
this is a text file

The number in the second column of the directory listing is the number of (hard) links a file object has associated with it. The number in the fifth column is the size of the file object in bytes. As you can see, soft links (more commonly called symbolic links or symlinks) use less space; in most modern file systems they exist only in the directory entry itself, consuming essentially zero disk space.

Hard links can be located anywhere in the file system and always refer to the file object; symlinks are path dependent.
 
Posts: 6794 | Location: NoVA | Registered: July 22, 2009
Baroque Bloke
Pipe Smoker
posted
quote:
Originally posted by mark123:
quote:
Originally posted by Pipe Smoker:
#1: “-H” so that hard links in the source will be preserved as such in the destination (rather than being rendered as multiple identical, but unrelated, files).

#2: “--del” so that objects that you’ve deleted from the source will be removed from the destination.

With those two additions your rsync command would be:
rsync -aH --del


I'm not sure I understand the -H switch. There are no multiple identical files on the destination.

I want the backup to retain files deleted from the source.

If you don’t want to use the “--del” option, fine. Your choice. As for me, I want the destination to be identical to the source after my rsync command executes, so I need the “--del” option.

The “-H” option: Apparently you believe that your source has no hard links. You may be correct. I created a bash command to list paths to all hard links in and below a specified directory, and I’m often surprised to see how many there are. So I need the “-H” so that my destination is identical to my source after my rsync command executes.

BTW – your “-a” option preserves symlinks in your source as symlinks in your destination, so you’re safe on those.
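One way to do that sort of listing (a guess at the approach, not necessarily the same script): find can print every regular file whose link count is greater than one:

find ~/backups -type f -links +1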



Serious about crackers
 
Posts: 9474 | Location: San Diego | Registered: July 26, 2014
Member
wrightd
posted
Write a stand-alone script that handles everything: logic, environment, pre-condition testing, error handling and notification, etc. Then just call the script from crontab as a single command (with parameters if needed). For example, your script would first test for pre-conditions: is the NAS up? If not, write an error log or send a text or email message. Those kinds of things. It doesn't matter what language you use as long as it works. I like bash, ksh, python, awk, sed, whatever; those programming utilities play very nicely together.

If you don't want/need a script, just string your commands together on the crontab line, separated by ; (or by && if a later step should only run when the earlier one succeeded). "do stuff a; do stuff b; test result with embedded if logic" like that. If that gets out of hand, just do the stand-alone script thing.
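For example, chained on one crontab line (remember that % must be escaped as \% in crontab entries, as in the OP's original):

1 0 * * 0 pg_dump -U postgres checkbook > ~/backups/checkbook_$(date +\%Y-\%m-\%d).bak && rsync -a ~/backups/ postgres@freenas:/mnt/NAS1Pool/db_backups/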

Regarding rsync: it rocks, with lots of history and loads of example code on the net. Like lots of code on the net, you may have to wade through lots of crap before you find the nuggets.




Lover of the US Constitution
Wile E. Coyote School of DIY Disaster
 
Posts: 8931 | Location: Nowhere the constitution is not honored | Registered: February 01, 2008
Baroque Bloke
Pipe Smoker
posted
quote:
Originally posted by architect:
<snip>
Hard links can be located anywhere in the file system and always refer to the file object; symlinks are path dependent.

Are you sure that hard links can be located anywhere in the file system?

It used to be that hard linked objects had to be on the same disk. Symlinked objects could be on different disks, but not hard linked objects.



Serious about crackers
 
Posts: 9474 | Location: San Diego | Registered: July 26, 2014
Optimistic Cynic
architect
posted
quote:
Originally posted by Pipe Smoker:
quote:
Originally posted by architect:
<snip>
Hard links can be located anywhere in the file system and always refer to the file object; symlinks are path dependent.

Are you sure that hard links can be located anywhere in the file system?
I'll amend that to "anywhere in a file system that supports hard links"

quote:
It used to be that hard linked objects had to be on the same disk. Symlinked objects could be on different disks, but not hard linked objects.
Yes, I remember that. But Mark's post references two FreeBSD-based systems (pfSense and FreeNAS). On FreeBSD's UFS, a hard link is just an additional directory entry pointing at the same inode, so it costs essentially nothing; the restriction is still that all of a file's hard links live within a single file system.

And, if I were to quibble, it was never "on the same disk" but "within the same file system." The advent of multi-volume file systems (e.g. RAID, etc.) has changed the whole notion of disk-oriented storage management.
 
Posts: 6794 | Location: NoVA | Registered: July 22, 2009