Thanks for the feedback!
From my research, though, it seems it would take some effort to start a
process and not register it in /proc; or at least, it would have to be done
intentionally to achieve that effect. I guess my question here is: why
would ovirt do that? Is there some performance gain? What processes inside
ovirt would do such a thing?
Appreciate the help
On Wed, Mar 22, 2017 at 3:32 AM, Yedidyah Bar David <didi(a)redhat.com> wrote:
On Tue, Mar 21, 2017 at 7:54 PM, Charles Kozler <ckozleriii(a)gmail.com> wrote:
> Unfortunately by the time I am able to SSH to the server and start
> looking around, that PID is nowhere to be found.
Even if you do this immediately when OSSEC finishes?
Do you only get a single pid from it?
>
> So it seems something winds up in ovirt, runs, doesn't register in /proc
> (I think even threads register themselves in /proc),
Now I did some tests. It seems they do, but they are only "visible" if you
access them directly, not if you e.g. 'ls -l /proc'.
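
For example, something like this should show it (a rough sketch, assuming
at least one multithreaded process is running; it just picks the first
thread id that is not a thread-group leader):

# tid=$(ps -e -T -o pid=,spid= | awk '$1 != $2 {print $2; exit}')
# ls /proc | grep -w "$tid"
# ls -ld /proc/$tid

The grep prints nothing, because the thread id is not listed when reading
the /proc directory, while the direct 'ls -ld /proc/$tid' succeeds, since
the entry can still be opened by name.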
> and then dies off
>
> Any ideas?
No idea about your specific issue. Based on your question above, I did this:
# for pid in $(seq 32768); do \
      if kill -0 $pid 2>/dev/null && ! ls -1 /proc | grep -qw $pid; then \
          ps -e -T | grep -w $pid | awk '{print $1}'; fi; done | sort -u | \
      while read ppid; do echo number of threads: \
          $(ps -e -T | grep -w $ppid | wc -l) ps $ppid: $(ps -h -p $ppid); done
number of threads: 5 ps 1149: 1149 ? Ssl 0:23 /usr/bin/python -Es /usr/sbin/tuned -l -P
number of threads: 3 ps 1151: 1151 ? Ssl 0:55 /usr/sbin/rsyslogd -n
number of threads: 2 ps 1155: 1155 ? Ssl 0:00 /usr/bin/ruby /usr/bin/fluentd -c /etc/fluentd/fluent.conf
number of threads: 12 ps 1156: 1156 ? Ssl 4:49 /usr/sbin/collectd
number of threads: 16 ps 1205: 1205 ? Ssl 0:08 /usr/sbin/libvirtd --listen
number of threads: 6 ps 1426: 1426 ? Sl 23:57 /usr/bin/ruby /usr/bin/fluentd -c /etc/fluentd/fluent.conf
number of threads: 32 ps 3171: 3171 ? S<sl 6:48 /usr/bin/python2 /usr/share/vdsm/vdsmd
number of threads: 6 ps 3173: 3173 ? Ssl 8:48 python /usr/sbin/momd -c /etc/vdsm/mom.conf
number of threads: 7 ps 575: 575 ? SLl 0:14 /sbin/multipathd
number of threads: 3 ps 667: 667 ? SLsl 0:09 /usr/sbin/dmeventd -f
number of threads: 2 ps 706: 706 ? S<sl 0:00 /sbin/auditd -n
number of threads: 6 ps 730: 730 ? Ssl 0:00 /usr/lib/polkit-1/polkitd --no-debug
number of threads: 3 ps 735: 735 ? Ssl 0:31 /usr/bin/python /usr/bin/ovirt-imageio-daemon
number of threads: 4 ps 741: 741 ? S<sl 0:00 /usr/bin/python2 /usr/share/vdsm/supervdsmd --sockfile /var/run/vdsm/svdsm.sock
number of threads: 2 ps 743: 743 ? Ssl 0:00 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
number of threads: 6 ps 759: 759 ? Ssl 0:00 /usr/sbin/gssproxy -D
number of threads: 5 ps 790: 790 ? SLsl 0:09 /usr/sbin/sanlock daemon
(There are probably more efficient ways to do this, nvm).
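
For what it's worth, a possibly simpler variant (untested sketch) would be
to walk /proc/<pid>/task directly instead of probing every possible pid;
that lists exactly the thread ids that do not show up in a plain listing
of /proc:

# for t in /proc/[0-9]*/task/[0-9]*; do \
      tid=${t##*/}; pid=${t%/task/*}; pid=${pid##*/}; \
      if [ "$tid" != "$pid" ]; then echo "tid $tid belongs to pid $pid"; fi; \
  done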
>
> On Tue, Mar 21, 2017 at 3:10 AM, Yedidyah Bar David <didi(a)redhat.com> wrote:
>>
>> On Mon, Mar 20, 2017 at 5:59 PM, Charles Kozler <ckozleriii(a)gmail.com>
>> wrote:
>> > Hi -
>> >
>> > I am wondering why OSSEC would be reporting hidden processes on my
>> > ovirt nodes? I run OSSEC across the infrastructure, and multiple ovirt
>> > clusters have assorted nodes that will report a process is running but
>> > does not have an entry in /proc, and thus a "possible rootkit" alert
>> > is fired.
>> >
>> > I am well aware that I do not have rootkits on these systems, but am
>> > wondering what exactly inside ovirt is causing this to trigger? Or any
>> > ideas? Below is a sample alert. All my google-fu turns up is that a
>> > process would have to **try** to hide itself from /proc, so I am
>> > curious what this is inside ovirt. Thanks!
>> >
>> > -------------
>> >
>> > OSSEC HIDS Notification.
>> > 2017 Mar 20 11:54:47
>> >
>> > Received From: (ovirtnode2.mydomain.com2) any->rootcheck
>> > Rule: 510 fired (level 7) -> "Host-based anomaly detection event
>> > (rootcheck)."
>> > Portion of the log(s):
>> >
>> > Process '24574' hidden from /proc. Possible kernel level rootkit.
>>
>> What do you get from:
>>
>> ps -eLf | grep -w 24574
>>
>> Thanks,
>> --
>> Didi
>
>
--
Didi