Tuesday, October 28, 2025

Objective-C functions

 https://chatgpt.com/share/6900839d-a774-8000-aab9-4423e7e5453d

- (return_type) function_name:( argumentType1 )argumentName1 joiningArgument2:( argumentType2 )argumentName2 ... joiningArgumentn:( argumentTypen )argumentNamen; the above is Objective-C syntax. What is the difference between joiningArgument and argumentName?
ChatGPT said:

Great question. This is exactly where many people get confused when they first learn Objective-C.
Let me break the syntax down piece by piece:


🔹 Full syntax

- (return_type)function_name:(argumentType1)argumentName1 joiningArgument2:(argumentType2)argumentName2 joiningArgument3:(argumentType3)argumentName3;

🔸 1️⃣ "function_name" and "joiningArgument2" are method labels

They are also called "argument labels" or the "external part" of a parameter name.
They are part of the method name itself, and they make call sites read clearly.

👉 For example:

- (void)setFirstName:(NSString *)first lastName:(NSString *)last;

Here, setFirstName:lastName: taken together is the complete method name.
A call is written like this:

[self setFirstName:@"Simon" lastName:@"Chen"];
  • setFirstName: is the first label

  • lastName: is the second label


🔸 2️⃣ "argumentName" is the variable name used inside the method

That is, it is the variable you can use in the method implementation.

Continuing the example above:

- (void)setFirstName:(NSString *)first lastName:(NSString *)last {
    NSLog(@"The name is %@ %@", first, last);
}

Here, first and last are the parameter variables inside the method.


🔸 3️⃣ Side-by-side comparison

Part | Role | Example | Purpose
setFirstName | first label (start of the method name) | setFirstName: | readability at the call site
lastName | second label | lastName: | readability at the call site
first | first parameter variable name | (NSString *)first | used inside the method body
last | second parameter variable name | (NSString *)last | used inside the method body

🔸 4️⃣ A concrete example: definition vs. call

Definition:

- (BOOL)checkStartButton:(NSString *)str1 joiningArgument2:(NSString *)str2;

Call:

[self checkStartButton:@"A" joiningArgument2:@"B"];

Inside the method:

- (BOOL)checkStartButton:(NSString *)str1 joiningArgument2:(NSString *)str2 {
    NSLog(@"The first argument is %@, the second is %@", str1, str2);
    return YES;
}

In one sentence:

  • joiningArgument (and the leading function_name) is part of the method name, there to make call sites readable

  • argumentName is the parameter variable used inside the method


Would you like me to draw a simple comparison chart (method definition vs. call vs. actual parameter names)? It makes this clear at a glance.

You said:
Can I pass in only some of the joiningArguments?
ChatGPT said:

Excellent! This is one of the key things to understand about Objective-C method calls.
The answer is:

No, you cannot pass only some of the joiningArguments.


🔹 Why not?

In Objective-C, the method name itself includes every joiningArgument label.
In other words:

- (void)checkStartButton:(NSString *)str1 joiningArgument2:(NSString *)str2;

The full name (selector) of this method is actually:

checkStartButton:joiningArgument2:

This whole string is one complete method name, not a list of optional parameters.


🔸 If you write only:

[self checkStartButton:@"A"];

the compiler reports an error:

No known method for selector 'checkStartButton:'

because checkStartButton: and checkStartButton:joiningArgument2: are two completely different method names.


🔸 The correct way to call it:

[self checkStartButton:@"A" joiningArgument2:@"B"];

🔸 Further notes

Objective-C has no concept of optional parameters or method overloading the way C++ and Swift do.
So to get the effect of "callable with either one argument or two", you usually have to:

Option 1: define multiple versions

- (void)checkStartButton:(NSString *)str1;
- (void)checkStartButton:(NSString *)str1 joiningArgument2:(NSString *)str2;

Option 2: allow the second parameter to be nil

- (void)checkStartButton:(NSString *)str1 joiningArgument2:(NSString * _Nullable)str2;

At the call site you can then pass nil:

[self checkStartButton:@"A" joiningArgument2:nil];

Summary:

Question | Answer
Can I pass in only some of the joiningArguments? | ❌ No, because every joiningArgument is part of the method name.
What if I only want to pass some of the arguments? | ✅ Define multiple methods, or allow the trailing parameters to be nil.

Would you like a concrete example comparing the "define multiple methods" and "allow nil" approaches in full code?

You said:
If a joiningArgument is blank, how do I call that method?
ChatGPT said:

Great question 👏. You are asking about the special case of unnamed argument labels in Objective-C.
Let's walk through it step by step.


🔹 The situation

Suppose you write a method like this:

- (void)checkStartButton:(NSString *)str1 :(NSString *)str2;

Notice that the second parameter has no joiningArgument label in front of it.
This is legal syntax, but it means:

  • the second parameter has no external label

  • at the call site you write the colon and the argument directly, with nothing in between


🔹 How to call it

The correct call looks like this:

[self checkStartButton:@"A" :@"B"];

✅ The first argument "A" maps to (NSString *)str1
✅ The second argument "B" maps to (NSString *)str2


🔹 Why it works this way

In Objective-C:

  • every colon : corresponds to one parameter

  • if there is no named joiningArgument before a colon, then the call site does not (and cannot) include any name there either

So the correspondence is:

Definition | Call
- (void)setName:(NSString *)first last:(NSString *)last; | [self setName:@"A" last:@"B"];
- (void)setName:(NSString *)first :(NSString *)last; | [self setName:@"A" :@"B"];

🔹 Practical advice

Even though the syntax is legal, in practice:

🚫 Omitting the joiningArgument is not recommended.

Because:

  • it makes call sites hard to read;

  • it is easy to mix up which value goes with which parameter;

  • Xcode and related tooling may flag it (the equivalent in Swift, "Missing argument label", is an outright error).

It is better to keep a clear label for every parameter, for example:

- (void)checkStartButton:(NSString *)str1 joiningArgument2:(NSString *)str2;

✅ Summary

Question | Answer
Can a joiningArgument be blank? | ✅ Yes, the syntax allows it.
How do you call it? | [self functionName:firstArg :secondArg];
Recommended practice? | 🚫 Don't omit it; give every parameter a label.

Want me to show a method with a blank label next to one with named labels, and compare the NSLog output of both?

Sunday, December 1, 2024

Firmware reverse engineering: turn off the laptop webcam LED indicator and hackers can spy on you unnoticed (repost)


Have you ever worried that your laptop's webcam could be switched on without your knowledge? Would you like full control over your webcam's LED indicator? Security researcher Andrey Konovalov recently disclosed a technique that can silently turn off the webcam LED on a ThinkPad X230 while the camera is being operated remotely. The work involved deep firmware reverse engineering and USB fuzzing. In security, fuzzing is used to test targets such as network protocols, applications, and embedded systems for vulnerabilities; it helps uncover bugs that could be exploited to compromise a system or steal data. This experiment highlights a potential weakness in webcam security.

A deep dive into USB fuzzing

Konovalov's research began during a long wait at an airport. He decided to use the time to probe his ThinkPad X230 webcam with USB fuzzing, a testing technique that sends large numbers of malformed USB requests to a device in an attempt to expose hidden flaws. Konovalov found that the X230 webcam's firmware can be updated over USB; unexpectedly, his fuzzer corrupted that firmware and left the camera unusable.
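
The basic idea of USB control-request fuzzing can be sketched in a few lines. The snippet below is only an illustration using pyusb, not Konovalov's actual fuzzer; the vendor/product IDs and request ranges are made up, and, as the article shows, firing random vendor requests at a device can corrupt its firmware.

import random
import usb.core  # pyusb

dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)  # hypothetical device IDs
if dev is None:
    raise SystemExit("device not found")

for _ in range(100):
    # Randomly choose a vendor-class control request, IN (0xC0) or OUT (0x40).
    bm_request_type = 0xC0 if random.random() < 0.5 else 0x40
    b_request = random.randrange(0, 256)
    w_value = random.randrange(0, 0x10000)
    w_index = random.randrange(0, 0x10000)
    try:
        if bm_request_type == 0xC0:
            data = dev.ctrl_transfer(bm_request_type, b_request, w_value, w_index, 64, timeout=100)
            print(hex(b_request), bytes(data)[:8])
        else:
            dev.ctrl_transfer(bm_request_type, b_request, w_value, w_index, b"\x00" * 8, timeout=100)
    except usb.core.USBError:
        pass  # most random requests are simply rejected by the device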

To dig deeper, Konovalov built an "unbrickable" test rig. He bought additional X230 webcam modules and moved the SPI flash chip onto a removable socket so that, whenever the firmware got corrupted, he could reflash it with an external programmer.

With this anti-bricking setup in place, Konovalov kept fuzzing and found several hidden USB vendor requests that let him read and write the webcam's firmware. He carefully documented what each request does and realized that his fuzzing had accidentally triggered the firmware-write request, overwriting the original firmware and breaking the camera.

Reverse engineering and LED control

Konovalov then set about reverse engineering the webcam firmware to find a way to control the LED. He discovered that the LED is wired to a GPIO pin on the camera controller, but the code that drives that pin lives in the Boot ROM, which cannot be overwritten over USB.

To obtain the Boot ROM code, Konovalov devised a clever technique that leaks its contents through subtle variations in USB behaviour. He injected code into the webcam firmware that decides, based on the value of a Boot ROM bit, whether the camera enumerates successfully. The method was slow, but it worked.

Konovalov then optimized the approach further, using UVC controls such as contrast and saturation to supply the offset dynamically, which sped up the Boot ROM dump considerably.

With the Boot ROM in hand, Konovalov reverse engineered it, located the code that handles USB vendor requests, and determined the XDATA addresses of the USB request parameters. Finally, by comparing memory snapshots with the LED on and off, he found the memory-mapped address that controls the LED.

Implications for other laptops

Konovalov's research exposes a broader security problem with webcams. He notes that whenever the LED is not hard-wired to the image sensor's power supply, it can potentially be controlled from software. He recommends that OEMs tie the LED's power to the sensor's power and adopt safer firmware-update mechanisms, such as digital signature checks, to keep malicious firmware from being flashed.
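
The signed-update recommendation is straightforward to picture in code. Below is a minimal sketch of the idea only: the file names and key handling are hypothetical, and on a real device this check would run inside the bootloader rather than in Python.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def firmware_is_authentic(image_path, sig_path, pubkey_bytes):
    """Verify a detached Ed25519 signature over a firmware image."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, image)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False

Only if this check passes would the updater be allowed to write the image to the SPI flash.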

Konovalov's work on the ThinkPad X230 webcam demonstrates how powerful USB fuzzing and firmware reverse engineering can be. His findings are a reminder that even seemingly simple devices such as webcams can hide security flaws. As technology advances, protecting our digital privacy becomes ever more important, and that takes a joint effort from users, researchers, and OEMs.

Thursday, November 14, 2024

An older CPU, or one without certain AVX instructions, may cause "Illegal instruction (core dumped)" while importing insightface

>>> import insightface
Illegal instruction (core dumped)
ChatGPT said:

It seems like you're encountering an "Illegal instruction" error when trying to use the insightface library. This issue typically occurs when the CPU tries to execute an instruction that is not supported by the hardware, which could be due to a few potential causes. Here are some steps you can take to troubleshoot and resolve this:
1. Check CPU Architecture Compatibility

    Ensure that your CPU supports the instruction sets required by insightface. For example, if you're running on an older CPU or one without certain AVX or SSE instructions, the library might fail. insightface relies on certain optimizations that may not be compatible with older hardware.
    You can check your CPU's capabilities by running a command like lscpu (on Linux) to see supported features; a small programmatic check is sketched below.
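
A quick way to do this check from Python rather than eyeballing lscpu output (a Linux-only sketch; it just parses /proc/cpuinfo):

def cpu_flags():
    # Read the flag list of the first CPU entry from /proc/cpuinfo.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for isa in ("sse4_2", "avx", "avx2", "avx512f"):
    print(f"{isa:8s}", "yes" if isa in flags else "NO")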

2. Reinstall insightface and Dependencies

Sometimes, the issue can be related to incorrect installation or missing dependencies. You can try reinstalling insightface and its dependencies:

pip uninstall insightface
pip install insightface

Also, make sure all other dependencies (e.g., mxnet, numpy) are up-to-date.
3. Check for the Correct Version of mxnet

insightface often relies on mxnet for its deep learning models. Ensure that the version of mxnet you are using is compatible with the version of insightface. Sometimes, using the wrong version can cause issues.

pip install mxnet==<compatible_version>

You can check the insightface documentation or the GitHub repository for version compatibility details.
4. Use a Virtual Environment

Sometimes, conflicting dependencies in your environment can cause issues. It might help to create a clean virtual environment and reinstall the necessary packages there:

python3 -m venv insightface_env
source insightface_env/bin/activate
pip install insightface

5. Check for System Updates

    Ensure your operating system and Python environment are fully up-to-date. Sometimes, kernel or system libraries can affect how certain instructions are handled.

6. Try Running on a Different Machine or Platform

If possible, test the library on a different machine or platform to see if the issue is related to your specific hardware or operating system.
7. Use Docker (Optional)

If you're working on a specific environment that may have compatibility issues, you can try running the library in a Docker container. There may be an official or community-supported Docker image that has the necessary dependencies preconfigured.

Let me know if you'd like more detailed instructions for any of these steps!
You said:
~/insightface$ lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         36 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  2
  On-line CPU(s) list:   0,1
Vendor ID:               GenuineIntel
  Model name:            Intel(R) Core(TM) i5-1038NG7 CPU @ 2.00GHz
    CPU family:          6
    Model:               126
    Thread(s) per core:  1
    Core(s) per socket:  2
    Socket(s):           1
    Stepping:            5
    BogoMIPS:            3993.60
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
                         cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall
                         nx rdtscp lm constant_tsc nopl xtopology nonstop_tsc
                         cpuid tsc_known_freq pni pclmulqdq ssse3 cx16 pcid
                         sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer
                         aes f16c rdrand hypervisor lahf_lm abm 3dnowprefetch
                         invpcid_single pti fsgsbase tsc_adjust bmi1 smep bmi2
                         invpcid rdseed adx smap clflushopt dtherm arat pln pts
Virtualization features:
  Hypervisor vendor:     KVM
  Virtualization type:   full
Caches (sum of all):     
  L1d:                   96 KiB (2 instances)
  L1i:                   64 KiB (2 instances)
  L2:                    1 MiB (2 instances)
  L3:                    6 MiB (1 instance)
NUMA:                    
  NUMA node(s):          1
  NUMA node0 CPU(s):     0,1
Vulnerabilities:         
  Itlb multihit:         KVM: Mitigation: VMX unsupported
  L1tf:                  Mitigation; PTE Inversion
  Mds:                   Vulnerable: Clear CPU buffers attempted, no microcode;
                         SMT Host state unknown
  Meltdown:              Mitigation; PTI
  Mmio stale data:       Vulnerable: Clear CPU buffers attempted, no microcode;
                         SMT Host state unknown
  Retbleed:              Vulnerable
  Spec store bypass:     Vulnerable
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer
                          sanitization
  Spectre v2:            Mitigation; Retpolines, STIBP disabled, RSB filling,
                         PBRSB-eIBRS Not affected
  Srbds:                 Unknown: Dependent on hypervisor status
  Tsx async abort:       Not affected
ChatGPT said:

From the output of lscpu, your CPU is reported as an Intel Core i5-1038NG7, which is relatively modern, and the flags show several advanced instruction sets such as SSE4.1, SSE4.2, and AES. However, the flag list does not include AVX or AVX2 (the physical i5-1038NG7 does support them, so the KVM hypervisor is most likely masking them from the guest), and those extensions are often required by deep learning libraries, including the optimized builds that insightface depends on.
Key Points:

    AVX Support: The virtual CPU does not expose AVX (Advanced Vector Extensions) or AVX2 instructions, which are used by many modern machine learning libraries for improved performance, particularly for matrix operations in frameworks like mxnet and tensorflow.

    Virtualization: You're running this in a virtualized environment (KVM), which may add another layer of complexity, especially if the virtualization setup is not optimized to handle advanced CPU instructions.

Possible Solutions:

Here are some ways to resolve or work around the issue:
1. Use a CPU-Compatible Version of insightface

If you're running into issues because insightface is trying to use AVX or AVX2 instructions, you could try installing a version of mxnet (the deep learning framework insightface depends on) that does not rely on AVX/AVX2 optimizations.

Try installing mxnet with no AVX support:

pip install mxnet==<version built without AVX>  # check the MXNet release notes for a CPU build that does not require AVX

Alternatively, you could try using a mxnet version that is compiled for lower-level optimizations, or even compile it from source with custom flags that avoid AVX instructions.
2. Try Running on a Different Machine

Since your CPU does not support AVX or AVX2, you may experience performance issues or errors when running insightface on this hardware. If possible, try running it on a more modern machine with AVX support.
3. Disable AVX in Code (if possible)

If you are comfortable rebuilding the underlying library (e.g., mxnet), you can compile it from source without AVX-specific optimizations. MXNet decides which instruction sets it uses at build time, so this means adjusting the build configuration rather than setting a runtime environment variable.

Check the mxnet or insightface documentation for more information about how to build without these optimizations.
4. Run in a Docker Container (if not already)

Running the code inside a container can sometimes avoid hardware-specific issues by ensuring that the environment is consistent across machines. There might be an official insightface Docker image or a community one that has been optimized for your setup.

Here's a basic setup to run insightface in Docker:

    Create a Dockerfile or use an existing one for insightface.
    Make sure to set up the environment correctly inside the container, possibly disabling AVX if needed.

5. Consider Using a Cloud GPU Instance

If this is a critical application and you're unable to resolve the issue on your current hardware, you could consider using a cloud instance (e.g., AWS EC2, Google Cloud, or Azure) with a more powerful CPU or even a GPU to run insightface without these hardware limitations.
Final Check: Reinstall the Library

After making adjustments to the environment or architecture, you can attempt to reinstall insightface:

pip uninstall insightface
pip install insightface

Let me know if you need more details or help with any of the steps above!

Thursday, November 7, 2024

In the output of Python's os.listdir("/"), what do names like .file and "." mean?

In Python, os.listdir("/") lists every entry in the given directory (here the root directory /). Regarding the names you mentioned:

  • .file: a name like this is simply a file or directory whose name begins with a dot. Dot-prefixed files and directories are conventionally called hidden files; many operating systems do not show them by default, because they are usually configuration or system files.

  • ".": a single dot refers to the current directory (and ".." to its parent). Note that os.listdir() does not actually include these two special entries in its results.


Names starting with a dot are hidden files, but a file without a leading dot can be hidden too (on Windows, for example, hidden is a file attribute rather than a naming convention).
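
A minimal sketch of the dot-file convention in practice, assuming a POSIX-style system where "hidden" just means the name starts with a dot:

import os

entries = os.listdir("/")  # "/" is only an example; any directory works
hidden = sorted(name for name in entries if name.startswith("."))
visible = sorted(name for name in entries if not name.startswith("."))

print("hidden (dot-prefixed):", hidden)
print("visible:", visible)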

Saturday, February 4, 2023

Settings for installing Docker for Windows (repost)

 

Installing Docker for Windows

The requirement for Docker for Windows is, naturally, Windows 10 at Build 10586 or later, and the Hyper-V and Containers features must be enabled, as shown in the figure in the original post.

Yes, you read that right: so far, Docker for Windows on Windows 10 depends on Hyper-V, and there is no native mode.



Once those features are enabled, we can download Docker for Windows from the location below. The reason for downloading from there is that, at the moment, running Windows Containers requires the Beta version.

...

Thursday, July 14, 2022

How to terminate a python subprocess launched with shell=True (repost)

I'm launching a subprocess with the following command:

p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)

However, when I try to kill using:

p.terminate()

or

p.kill()

The command keeps running in the background, so I was wondering how can I actually terminate the process.

Note that when I run the command with:

p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)

It does terminate successfully when issuing the p.terminate().


Use a process group so as to enable sending a signal to all the processes in the group. For that, you should attach a session id to the parent process of the spawned/child processes, which is a shell in your case. This will make it the group leader of those processes. So now, when a signal is sent to the process group leader, it's transmitted to all of the child processes of this group.

Here's the code:

import os
import signal
import subprocess

# The os.setsid() is passed in the argument preexec_fn so
# it's run after the fork() and before  exec() to run the shell.
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE, 
                       shell=True, preexec_fn=os.setsid) 

os.killpg(os.getpgid(pro.pid), signal.SIGTERM)  # Send the signal to all the process groups
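
On Python 3 there is a slightly simpler variant of the same idea (not part of the original answer): passing start_new_session=True makes the child call setsid() for you, so the shell becomes the leader of its own session and process group without an explicit preexec_fn.

import os
import signal
import subprocess

cmd = "sleep 60 & sleep 120"  # hypothetical long-running shell command
# start_new_session=True runs setsid() in the child, so the shell and
# everything it spawns end up in a process group we can signal as a whole.
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       shell=True, start_new_session=True)

os.killpg(os.getpgid(pro.pid), signal.SIGTERM)  # terminate the whole group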