
Core dumped, but core file is not in the current directory?

While running a C program, it says "(core dumped)", but I can't see any core file under the current path.

I have set and verified the ulimit:

ulimit -c unlimited 
ulimit -a 

I also tried to find a file named "core", but I couldn't find the core dump file.

Any help? Where is my core file?


Read /usr/src/linux/Documentation/sysctl/kernel.txt.

core_pattern is used to specify a core dumpfile pattern name.

  • If the first character of the pattern is a '|', the kernel will treat the rest of the pattern as a command to run. The core dump will be written to the standard input of that program instead of to a file.

Rather than writing the core dump to disk, your system is configured to send it to the abrt program (abrt stands for Automated Bug Reporting Tool, not "abort"). The Automated Bug Reporting Tool is possibly not as documented as it should be...

In any case, the quick answer is that you should be able to find your core file in /var/cache/abrt, where abrt stores it after being invoked. Similarly, other systems using Apport may squirrel away cores in /var/crash, and so on.
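
For example, on an abrt-based system the check and the fallback location look roughly like the sketch below (the hook path and pattern are illustrative; they vary between distributions and abrt versions):

$ cat /proc/sys/kernel/core_pattern        # a leading '|' means crashes are piped to a program
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t ...
$ sudo ls /var/cache/abrt                  # newer releases may use /var/spool/abrt instead
$ abrt-cli list                            # abrt's own CLI, if installed, lists recorded crashes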


On recent Ubuntu (12.04 in my case), it's possible for "Segmentation fault (core dumped)" to be printed, but no core file produced where you might expect one (for instance for a locally compiled program).

This can happen if you have a core file size ulimit of 0 (you haven't done ulimit -c unlimited) -- this is the default on Ubuntu. Normally that would suppress the "(core dumped)", cluing you into your mistake, but on Ubuntu, corefiles are piped to Apport (Ubuntu's crash reporting system) via /proc/sys/kernel/core_pattern, and this seems to cause the misleading message.

If Apport discovers that the program in question is not one it should be reporting crashes for (which you can see happening in /var/log/apport.log), it falls back to simulating the default kernel behaviour of putting a core file in the cwd (this is done in the script /usr/share/apport/apport). This includes honouring ulimit, in which case it does nothing. But (I assume) as far as the kernel is concerned, a corefile was generated (and piped to apport), hence the message "Segmentation fault (core dumped)".

Ultimately PEBKAC for forgetting to set ulimit, but the misleading message had me thinking I was going mad for a while, wondering what was eating my corefiles.

(Also, in general, the core(5) manual page -- man 5 core -- is a good reference for where your core file ends up and reasons it might not be written.)
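
A quick way to check what happened, using the pieces mentioned above (the pattern shown is only an example of what Ubuntu sets; yours may differ):

$ ulimit -c                                # 0 means core files are suppressed in this shell
0
$ cat /proc/sys/kernel/core_pattern        # a '|' here means crashes are handed to Apport
|/usr/share/apport/apport %p %s %c ...
$ ulimit -c unlimited                      # raise the limit, then re-run the crashing program
$ tail /var/log/apport.log                 # shows what Apport decided to do with the crash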


With the arrival of systemd there's another scenario as well: by default systemd stores core dumps in its journal, where they are accessible with the coredumpctl command (called systemd-coredumpctl on older releases). This is defined in the core_pattern file:

$ cat /proc/sys/kernel/core_pattern 
|/usr/lib/systemd/systemd-coredump %p %u %g %s %t %e

The easiest way to check for stored core dumps is coredumpctl list (older core dumps may have been removed automatically). This behaviour can be disabled with a simple "hack":

$ ln -s /dev/null /etc/sysctl.d/50-coredump.conf
$ sysctl -w kernel.core_pattern=core      # or just reboot

As always, the core file size limit has to be at least as large as the core being dumped, e.g. via ulimit -c unlimited.
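
If core_pattern points at systemd-coredump as shown above, the dumps end up in the journal (and under /var/lib/systemd/coredump/), and coredumpctl can pull them back out. A minimal sketch, assuming the crashed binary was called a.out:

$ coredumpctl list                   # newest entries at the bottom; old ones may be vacuumed
$ coredumpctl info a.out             # metadata for the most recent matching crash
$ coredumpctl dump a.out -o core     # extract the core file into ./core
$ coredumpctl gdb a.out              # or open it straight in gdb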


Instructions for getting a core dump under Ubuntu 16.04 LTS:

  1. As @jtn has mentioned in his answer, Ubuntu delegates the handling of crashes to apport, which in turn refuses to write the dump because the program is not from an installed package.

  2. To remedy the problem, we need to make sure apport writes core dump files for non-package programs as well. To do so, create a file named ~/.config/apport/settings with the following contents:
    [main]
    unpackaged=true

  3. Now crash your program again, and you will see crash files being generated in the folder /var/crash, with names like *.1000.crash. Note that these files cannot be read by gdb directly.

  4. [Optional] To make the dumps readable by gdb, run the following command (a combined sketch follows this list):

    apport-unpack <location_of_report> <target_directory>
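
Putting steps 2-4 together, a session might look like the following sketch (./myprog and the crash file name are placeholders; apport-unpack usually extracts the core into a file called CoreDump):

$ mkdir -p ~/.config/apport
$ printf '[main]\nunpackaged=true\n' > ~/.config/apport/settings
$ ./myprog
Segmentation fault (core dumped)
$ ls /var/crash
_home_user_myprog.1000.crash
$ apport-unpack /var/crash/_home_user_myprog.1000.crash /tmp/myprog-crash
$ gdb ./myprog /tmp/myprog-crash/CoreDump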

References: Core_dump – Oracle VM VirtualBox


I can think of the following two possibilities:

  1. As others have already pointed out, the program might chdir(). Is the user running the program allowed to write into the directory it chdir()'ed to? If not, it cannot create the core dump.

  2. For some weird reason the core dump isn't named core.*. You can check /proc/sys/kernel/core_pattern for that. Also, the find command you used wouldn't find a typical core dump; use find / -name "*core.*" instead, as the typical name of the core dump is core.$PID (see the sketch below).
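
A quick sketch for checking both possibilities from a shell (adjust the paths and time window to your situation):

$ cat /proc/sys/kernel/core_pattern              # where, and under what name, cores should land
$ find / -name "*core*" -mmin -10 2>/dev/null    # anything core-like written in the last 10 minutes
$ ls -ld /path/the/program/chdirs/to             # is that directory writable by the crashing user?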


On Ubuntu 18.04, the easiest way to get a core file is to stop the apport service with the command below.

sudo service apport stop

Then rerun the application, and you will get the dump file in the current directory.
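
For example (./myprog is a placeholder for your own binary):

$ sudo service apport stop
$ ulimit -c unlimited          # still needed in the shell you run the program from
$ ./myprog
Segmentation fault (core dumped)
$ ls core*                     # the exact name depends on kernel.core_pattern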


If you're missing core dumps for binaries on RHEL when using abrt, make sure that /etc/abrt/abrt-action-save-package-data.conf

contains

ProcessUnpackaged = yes

This enables the creation of crash reports (including core dumps) for binaries which are not part of installed packages (e.g. locally built).
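
A sketch of applying that change (the abrtd service name is assumed; adjust for your RHEL release):

$ sudo sed -i 's/^ProcessUnpackaged.*/ProcessUnpackaged = yes/' /etc/abrt/abrt-action-save-package-data.conf
$ sudo systemctl restart abrtd     # pick up the new setting
$ abrt-cli list                    # crashes for locally built binaries should now be recorded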


On Fedora 25, I could find the core file at

/var/spool/abrt/ccpp-2017-02-16-16:36:51-2974/coredump

where "ccpp-2017-02-16-16:36:51-2974" follows the pattern "%s %c %p %u %g %t %P %…" given in /proc/sys/kernel/core_pattern.
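
That coredump can be fed straight to gdb together with the binary that crashed (the binary path below is a placeholder):

$ ls /var/spool/abrt/
ccpp-2017-02-16-16:36:51-2974
$ gdb ./myprog /var/spool/abrt/ccpp-2017-02-16-16:36:51-2974/coredump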


My efforts in WSL have been unsuccessful.

For those running on Windows Subsystem for Linux (WSL) there seems to be an open issue at this time for missing core dump files.

The comments indicate that

This is a known issue that we are aware of, it is something we are investigating.

Github issue

Windows Developer Feedback


In my case, the reason is that the ulimit command only affects the current terminal.

If I set ulimit -c unlimited in one terminal and then start a new terminal to run the program, it will not generate a core file when it dumps core.

You have to check the core file size limit of the terminal which runs your program.

The following steps work on Ubuntu 20.04 and Ubuntu 21.04 (a sketch follows the list):

  1. Stop the apport service:
sudo service apport stop
  2. Set the core size limit in the terminal that is about to run your program:
ulimit -c unlimited
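
A quick sanity check before running your program, all in the same terminal (the kill -SEGV line is just a throwaway way to force a crash):

$ sudo service apport stop
$ ulimit -c unlimited
$ ulimit -c                    # must print "unlimited" in this very terminal
unlimited
$ sh -c 'kill -SEGV $$'        # throwaway test crash
Segmentation fault (core dumped)
$ ls core*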


I'm on Linux Mint 19 (Ubuntu 18 based). I wanted to have core dump files in the current folder. I had to do two things:

  1. Change /proc/sys/kernel/core_pattern (either with # echo "core.%p.%s.%c.%d.%P" > /proc/sys/kernel/core_pattern or with # sysctl -w kernel.core_pattern=core.%p.%s.%c.%d.%P)
  2. Raise the limit for the core file size with $ ulimit -c unlimited (see the sketch below)
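
As a combined sketch (the sysctl needs root, the ulimit applies to your own shell, and ./myprog is a placeholder):

$ sudo sysctl -w kernel.core_pattern=core.%p.%s.%c.%d.%P
$ ulimit -c unlimited
$ ./myprog
Segmentation fault (core dumped)
$ ls core.*                    # e.g. core.<pid>.<signal>...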

That was already written in the answers above, but I wrote this to summarize it succinctly. Interestingly, changing the limit did not require root privileges (as per https://askubuntu.com/questions/162229/how-do-i-increase-the-open-files-limit-for-a-non-root-user non-root can only lower the limit, so that was unexpected - comments about it are welcome).


I found the core files of my Ubuntu 20.04 system at:

/var/lib/apport/coredump 


ulimit -c unlimited made the core file correctly appear in the current directory after a "core dumped".


If you use Fedora, in order to generate the core dump file in the same directory as the binary file:

echo "core.%e.%p" > /proc/sys/kernel/core_pattern

And

ulimit -c unlimited
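
For example (note that the redirect into /proc/sys/kernel/core_pattern has to run as root, so either use a root shell or sysctl; ./myprog is a placeholder):

# echo "core.%e.%p" > /proc/sys/kernel/core_pattern     # or: sudo sysctl -w kernel.core_pattern=core.%e.%p
$ ulimit -c unlimited
$ ./myprog
Segmentation fault (core dumped)
$ ls core.myprog.*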


A one-liner to get the latest core dump path:

ls -t $(cat /proc/sys/kernel/core_pattern | awk -F% '{print $1"*"}') 2>/dev/null | head -1

You can of course modify the last -1 on that line to e.g. -4 to get the last 4 core dumps.

Note: this isn't expected to work if, for example, the path pattern uses format specifiers before the last /, or when files other than core dumps are present in that directory.
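
For instance, if kernel.core_pattern were /var/crash/core.%e.%p, the awk strips everything from the first % onward and the one-liner effectively expands to the following (illustrative pattern, not necessarily yours):

$ cat /proc/sys/kernel/core_pattern
/var/crash/core.%e.%p
$ ls -t /var/crash/core.* 2>/dev/null | head -1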
