Add all the changes, including deletions, using the command given below and then push.
git add --all
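A minimal sketch of this workflow against a throwaway repository (the file name, commit messages, and paths are placeholders, not from the original answer):

```shell
set -e
repo=$(mktemp -d)            # throwaway repository for the demonstration
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "you"
echo data > old.txt
git add old.txt
git commit -qm "add old.txt"
rm old.txt                   # delete the file from the working tree
git add --all                # stages modifications, additions, AND deletions
git commit -qm "remove old.txt"
# git push                   # then push as usual (requires a configured remote)
```

`git add --all` (or `git add -A`) records deletions as well as new and modified files; on modern Git, `git add <path>` of a removed path also stages the deletion.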
I’m trying to use Jenkins’ Publish Over SSH plugin to copy all files AND sub-directories of a given directory, but so far I’ve only been able to copy files, NOT directories.
I have a directory named foo in my workspace, and during the build I want to copy everything in this directory to a remote server. I’ve tried the pattern foo/**, but it doesn’t copy all the sub-directories.
For a recursive copy of a directory, you should give
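As a hedged note (based on general Ant-style glob semantics, which the plugin’s Source files field uses, rather than on the truncated answer above), a pattern of the form below matches files at every depth under the directory, whereas a bare trailing ** may be treated differently by some matchers:

```
foo/**/*
```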
I have a strange issue on the latest Jenkins 1.634. Publish Over SSH writes to the log that it put the file correctly, but nothing appears on the remote server.
e.g. I have these logs:
SSH: cd [var/www/data-fb-localtest]
SSH: put [asm.js]
SSH: put [asm.js.gz]
SSH: put [hero.data]
SSH: put [hero_main.js]
SSH: cd [/home/dev]
SSH: cd [var/www/data-fb-localtest/]
SSH: put [achievements.exm]
SSH: put [ai.exm]
SSH: put [atlas0.atlas]
SSH: put [atlas0.rgbz]
but nothing appears in var/www/data-fb-localtest
I found the issue. I did not set the remote root directory, and in the publish task I used an absolute path. But the plugin does not use the absolute path; it treats the path as relative to my user’s home directory.
Ceres is a time-series database format intended to replace Whisper as the default storage format for Graphite. In contrast with Whisper, Ceres is not a fixed-size database and is designed to better support sparse data of arbitrary fixed-size resolutions. This allows Graphite to distribute individual time-series across multiple servers or mounts.
Ceres is not actively developed at the moment. For alternatives to Whisper, look at alternative storage backends.
Ceres databases consist of a single tree, contained within a single path on disk, that stores all metrics as nodes in nested directories.
A Ceres node represents a single time-series metric and is composed of at least two data files: a slice to store all data points, and an arbitrary key-value metadata file. The minimum required metadata for a node is a 'timeStep'. This setting is the finest resolution that can be used for writing. A Ceres node, however, can contain and read data with other, less-precise values in its underlying slice data.
Other metadata keys that may be set for compatibility with Graphite are
A Ceres slice contains the actual data points in a file. The only other information a slice holds is the timestamp of its oldest data point and its resolution, both of which are encoded as part of its filename in the format
Data points in Ceres are stored on-disk as a contiguous list of big-endian double-precision floats. The timestamp of a datapoint is not stored with the value, rather it is calculated by using the timestamp of the slice plus the index offset of the value multiplied by the resolution.
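As a worked example of this timestamp arithmetic (the slice timestamp, resolution, and index below are hypothetical values, not taken from a real database):

```shell
# timestamp(datapoint) = slice_timestamp + index * resolution
slice_start=1420070400   # hypothetical slice timestamp (seconds since Epoch)
resolution=60            # hypothetical 60-second resolution
index=5                  # the sixth datapoint in the slice
echo $(( slice_start + index * resolution ))   # → 1420070700
```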
The timestamp is the number of seconds since the UNIX Epoch (01-01-1970). The data value is parsed by the Python float() function and as such behaves the same way for special strings such as 'inf'. Maximum and minimum values are determined by the Python interpreter’s allowable range for float values, which can be found by executing:
python -c 'import sys; print(sys.float_info)'
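For instance, float() accepts the special strings mentioned above (a quick demonstration, assuming a python3 binary is on the PATH):

```shell
# float() parses 'inf', '-inf' and 'nan' into IEEE 754 special values:
python3 -c "print(float('inf'), float('-inf'), float('nan'))"   # → inf -inf nan
```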
Ceres databases contain one or more slices, each with a specific data resolution and a timestamp to mark the beginning of the slice. Slices are ordered from the most recent timestamp to the oldest. The resolution of the data is not considered when reading from a slice; it matters only when writing, which requires that a slice with the finest precision configured for the node exists.
Gaps in data are handled in Ceres by padding slices with null datapoints. If the gap is too big, however, a new slice is created instead. If a Ceres node accumulates too many slices, read performance can suffer; this can be caused by intermittently reported data. To mitigate slice fragmentation, there is a tolerance for how much space can be wasted within a slice file to avoid creating a new one. That tolerance level is determined by 'MAX_SLICE_GAP', which is the number of consecutive null datapoints allowed in a slice file.
If set very low, Ceres will waste less disk space on padding, but it will be prone to performance problems caused by slice fragmentation, which can be severe.
If set really high, Ceres will waste a bit more disk space. Although each null datapoint wastes only 8 bytes, you must keep your filesystem’s block size in mind. If you suffer slice fragmentation issues, you should increase this value or defragment the data more often. However, you should not set it to a huge value, because then if a large but allowed gap occurs it has to get filled in, which means that instead of a simple 8-byte write to a new file we could end up doing an (8 * MAX_SLICE_GAP)-byte write to the latest slice.
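To put a number on that worst case (the MAX_SLICE_GAP value here is hypothetical, not a recommended setting):

```shell
# each null datapoint occupies 8 bytes, so filling a maximal allowed gap writes:
MAX_SLICE_GAP=80                 # hypothetical setting
echo $(( 8 * MAX_SLICE_GAP ))    # → 640 bytes written to the latest slice
```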
Expected features such as roll-up aggregation and data expiration are not provided by Ceres itself, but instead are implemented as maintenance plugins.
Such a rollup plugin exists for Ceres; it aggregates data points in a way similar to the behavior of Whisper archives: multiple data points are collapsed and written to a lower-precision slice, and data points outside the configured slice retentions are trimmed. By default an average function is used, but alternative methods can be chosen by changing the metadata.
When data is retrieved (scoped by a time range), the first slice which has data within the requested interval is used. If the time period overlaps a slice boundary, then both slices are read, with their values joined together. Any missing data between them are filled with null data points.
There is currently no support in Ceres for handling slices with mixed resolutions in the same way that is done with Whisper archives.
Do you occasionally share your Linux desktop machine with family members, friends, or colleagues at your workplace? If so, you have a reason to hide certain private files and directories. The question is: how can you do this?
In this tutorial, we will explain an easy and effective way to hide files and directories and view hidden files/directories in Linux from the terminal and GUI.
As we’ll see below, hiding files and directories in Linux is so simple.
To hide a file or directory from the terminal, simply prepend a dot (.) to its name using the mv command, as follows.
$ ls
$ mv sync.ffs_db .sync.ffs_db
$ ls
Using the GUI method, the same idea applies here: just rename the file by adding a dot (.) at the start of its name, as shown below.
Once you have renamed it, the file will still be visible; move out of the directory and open it again, and it will be hidden thereafter.
To view hidden files, run the ls command with the -a flag, which enables viewing of all files in a directory, or the -al flag for a long listing.
$ ls -a
OR
$ ls -al
From a GUI file manager, go to View and check the option Show Hidden Files to view hidden files or directories.
In order to add a little security to your hidden files, you can compress them with a password and then hide them from a GUI file manager as follows.
Select the file or directory and right-click on it, then choose Compress from the menu. When the compression preferences interface appears, click on “Other options” to get the password option, as shown in the screenshot below.
Once you have set the password, click on Create.
From now on, each time anyone wants to open the file, they’ll be asked to provide the password created above.
Now you can hide the file by renaming it with a leading dot (.) as we explained before.
fswatch is a cross-platform file change monitor that delivers notification alerts when the contents of the specified files or directories are altered or modified.
It executes four types of monitors on different operating systems such as:
Unfortunately, the fswatch package is not available to install from the default system repositories in any Linux distribution. The only way to install the latest version of fswatch is to build it from the source tarball, as shown in the following installation instructions.
First grab the latest fswatch tarball using the following wget command and install it as shown:
$ wget https://github.com/emcrisostomo/fswatch/releases/download/1.9.3/fswatch-1.9.3.tar.gz
$ tar -xvzf fswatch-1.9.3.tar.gz
$ cd fswatch-1.9.3
$ ./configure
$ make
$ sudo make install
Important: Make sure you have GNU GCC (C and C++ compiler) and development tools (build-essential on Debian/Ubuntu) installed on the system before you compile fswatch from source. If not, install them using the following command for your respective Linux distribution.
# yum group install 'Development Tools'    [On CentOS/RHEL]
# dnf group install 'Development Tools'    [On Fedora 22+ Versions]
$ sudo apt-get install build-essential     [On Debian/Ubuntu Versions]
On Debian/Ubuntu distributions, you might get the following error while executing the fswatch command:
fswatch: error while loading shared libraries: libfswatch.so.6: cannot open shared object file: No such file or directory
To fix it, execute the command below; this refreshes the links and cache for the dynamic libraries before you start using fswatch.
$ sudo ldconfig
The general syntax for running fswatch is:
$ fswatch [option] [path]
On Linux, it is recommended that you use the default inotify monitor. You can list the available monitors by employing the -M or --list-monitors option:
$ fswatch -M
$ fswatch --list-monitors
The command below enables you to watch for changes in the current directory (/home/tecmint), with events being delivered to standard output every 4 seconds. The -l or --latency option allows you to set the latency in seconds, the default being 1 second.
$ fswatch -l 4 .
The next command monitors changes to the /var/log/auth.log file every 5 seconds:
$ fswatch -l 5 /var/log/auth.log
The --timestamp option prints the timestamp for every event; to print the time in UTC format, employ the --utc-time option. You can also format the time using the --format-time option:
$ fswatch --timestamp /var/log/auth.log
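The format string follows strftime(3); the same specifiers can be previewed with GNU date (this illustration assumes GNU coreutils and is not an fswatch invocation):

```shell
# '%F %T' renders as YYYY-MM-DD HH:MM:SS, the shape --format-time would print:
date -u -d @1420070400 '+%F %T'   # → 2015-01-01 00:00:00
```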
The --event-flags option tells fswatch to print the event flags alongside the event path. You can use the --event-field-separator option to print events using a particular separator.
$ fswatch --event-flags ~ /var/log/auth.log
To print the numeric value of an event indicating changes in your home directory and the /var/log/auth.log file, use the --numeric option as below:
$ fswatch --numeric ~ /var/log/auth.log
You can look through the fswatch man page for detailed usage options and information:
$ man fswatch
Have you ever wondered where the various files contained inside a package are installed (located) in the Linux file system? In this article, we’ll show how to list all files installed from or present in a certain package or group of packages in Linux.
This can help you to easily locate important package files like configuration files, documentation and more. Let’s look at the different methods of listing files in or installed from a package:
To install and use yum-utils, run the commands below:
# yum update
# yum install yum-utils
Now you can list the files of an installed RPM package, for example the httpd web server (note that the package name is case-sensitive). The --installed flag means installed packages, and the -l flag enables listing of files:
# repoquery --installed -l httpd
# dnf repoquery --installed -l httpd    [On Fedora 22+ versions]
Important: In Fedora 22+ versions, the repoquery command is integrated with the dnf package manager for RPM-based distributions to list files installed from a package, as shown above.
Alternatively, you can also use the rpm command below to list the files inside, or installed on the system from, a .rpm package, where the -l flag means to list the files in the package:
# rpm -ql httpd
Another useful option, -p, is used to list the files of a .rpm package before installing it:
# rpm -qlp telnet-server-1.2-137.1.i586.rpm
On Debian/Ubuntu distributions, you can use the dpkg command with the -L flag to list the files installed on your Debian system or its derivatives from a given package.
In this example, we will list files installed from apache2 web server:
$ dpkg -L apache2